<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Mattia Setzu</style></author><author><style face="normal" font="default" size="100%">Riccardo Guidotti</style></author><author><style face="normal" font="default" size="100%">Anna Monreale</style></author><author><style face="normal" font="default" size="100%">Franco Turini</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Cellier, Peggy</style></author><author><style face="normal" font="default" size="100%">Driessens, Kurt</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Global Explanations with Local Scoring</style></title><secondary-title><style face="normal" font="default" size="100%">Machine Learning and Knowledge Discovery in Databases</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2020</style></year><pub-dates><date><style  face="normal" font="default" size="100%">2020//</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/chapter/10.1007%2F978-3-030-43823-4_14</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer International Publishing</style></publisher><pub-location><style face="normal" font="default" size="100%">Cham</style></pub-location><isbn><style face="normal" font="default" size="100%">978-3-030-43823-4</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Artificial Intelligence systems often adopt machine learning models encoding complex algorithms with potentially unknown behavior. 
As the application of these “black box” models grows, it is our responsibility to understand their inner workings and to express them as human-understandable explanations. To this end, we propose a rule-based, model-agnostic explanation method that follows a local-to-global schema: it derives a global explanation summarizing the decision logic of a black box by generalizing from the local explanations of individual predicted instances. We define a scoring system based on a rule relevance score to extract global explanations from a set of local explanations expressed as decision rules. Experiments on several datasets and black boxes show the stability and low complexity of the global explanations provided by the proposed solution in comparison with baselines and state-of-the-art global explainers.</style></abstract></record></records></xml>