TY  - CONF
T1  - Meaningful explanations of Black Box AI decision systems
T2  - Proceedings of the AAAI Conference on Artificial Intelligence
Y1  - 2019
A1  - Dino Pedreschi
A1  - Fosca Giannotti
A1  - Riccardo Guidotti
A1  - Anna Monreale
A1  - Salvatore Ruggieri
A1  - Franco Turini
AB  - Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user’s features into a class or a score without exposing the reasons why. This is problematic not only for lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. We focus on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems, introducing the local-to-global framework for black box explanation, articulated along three lines: (i) the language for expressing explanations in terms of logic rules, with statistical and causal interpretation; (ii) the inference of local explanations for revealing the decision rationale for a specific case, by auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of many local explanations into simple global ones, with algorithms that optimize for quality and comprehensibility. We argue that the local-first approach opens the door to a wide variety of alternative solutions along different dimensions: a variety of data sources (relational, text, images, etc.), a variety of learning problems (multi-label classification, regression, scoring, ranking), a variety of languages for expressing meaningful explanations, and a variety of means to audit a black box.
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
UR  - https://aaai.org/ojs/index.php/AAAI/article/view/5050
ER  -