TY  - JOUR
T1  - Factual and Counterfactual Explanations for Black Box Decision Making
JF  - IEEE Intelligent Systems
Y1  - 2019
A1  - Guidotti, Riccardo
A1  - Monreale, Anna
A1  - Giannotti, Fosca
A1  - Pedreschi, Dino
A1  - Ruggieri, Salvatore
A1  - Turini, Franco
AB  - The rise of sophisticated machine learning models has brought accurate but obscure decision systems, which hide their logic, thus undermining transparency, trust, and the adoption of artificial intelligence (AI) in socially sensitive and safety-critical contexts. We introduce a local rule-based explanation method, providing faithful explanations of the decision made by a black box classifier on a specific instance. The proposed method first learns an interpretable, local classifier on a synthetic neighborhood of the instance under investigation, generated by a genetic algorithm. Then, it derives from the interpretable classifier an explanation consisting of a decision rule, explaining the factual reasons of the decision, and a set of counterfactuals, suggesting the changes in the instance features that would lead to a different outcome. Experimental results show that the proposed method outperforms existing approaches in terms of the quality of the explanations and of the accuracy in mimicking the black box.
UR  - https://ieeexplore.ieee.org/abstract/document/8920138
ER  - 