Science and technology for the eXplanation of AI decision making


Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user’s features into a class or a score without exposing the reasons why. This is problematic not only because of the lack of transparency, but also because of possible biases that the algorithms inherit from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions.
The project, funded by an ERC Advanced Grant awarded to Fosca Giannotti, addresses the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems. It introduces the local-to-global framework for black box explanation, articulated along three lines: (i) a language for expressing explanations in terms of expressive logic rules, with statistical and causal interpretation; (ii) the inference of local explanations that reveal the decision rationale for a specific case, obtained by auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of many local explanations into simple global ones, with algorithms that optimize for quality and comprehensibility.

An intertwined line of research investigates (i) causal explanations, i.e., models that capture the causal relationships among the (endogenous and exogenous) variables and the decision, and (ii) mechanistic/physical models that capture the detailed data-generation behavior behind specific deep learning models, by means of the tools of the statistical physics of complex systems.

The project will also develop: (1) an explanation infrastructure for benchmarking the methods developed within and outside the project, equipped with platforms for user assessment of explanations and for the crowdsensing of observational decision data; (2) an ethical-legal framework, covering both the compliance and the impact of the developed methods with respect to current legal standards and the “right of explanation” provisions of the GDPR; and (3) a repertoire of case studies in explanation-by-design, with priority given to health and fraud detection applications.
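The second line above, auditing the black box in the vicinity of the target instance, can be sketched in code. The following is a minimal, hypothetical illustration in the spirit of local surrogate methods such as LIME, not the project's actual algorithm: the toy black box, the Gaussian perturbation, and the weighted linear surrogate are all assumptions made for the example.

```python
import numpy as np

def local_explanation(black_box, x, n_samples=500, sigma=0.3, seed=0):
    """Audit a black box around instance x (local-surrogate sketch):
    perturb x, query the black box on the neighborhood, then fit a
    proximity-weighted linear surrogate whose coefficients serve as
    local feature importances."""
    rng = np.random.default_rng(seed)
    # synthetic neighborhood around the target instance
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    y = black_box(Z)  # labels assigned by the opaque model
    # proximity weights: nearby perturbations count more
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))
    sw = np.sqrt(w)
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    # weighted least squares fit of the interpretable surrogate
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature local importances (intercept dropped)

# toy black box: class 1 iff feature 0 exceeds a threshold
bb = lambda X: (X[:, 0] > 0.5).astype(float)
x0 = np.array([0.6, 0.1])
imp = local_explanation(bb, x0)
# feature 0 should dominate the local explanation at x0
```

On this toy black box the surrogate's coefficient for feature 0 dominates, matching the decision rationale in the vicinity of `x0`; real systems would replace the linear surrogate with the expressive logic rules the project targets.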

Pedreschi, D., F. Giannotti, R. Guidotti, A. Monreale, S. Ruggieri, and F. Turini, "Meaningful explanations of Black Box AI decision systems", Proceedings of the AAAI Conference on Artificial Intelligence, 2019.
Panigutti, C., R. Guidotti, A. Monreale, and D. Pedreschi, "Explaining multi-label black-box classifiers for health applications", International Workshop on Health Intelligence: Springer, 2019.
Guidotti, R., A. Monreale, and L. Cariaggi, "Investigating Neighborhood Generation Methods for Explanations of Obscure Image Classifiers", Pacific-Asia Conference on Knowledge Discovery and Data Mining: Springer, 2019.
Guidotti, R., and S. Ruggieri, "On The Stability of Interpretable Models", 2019 International Joint Conference on Neural Networks (IJCNN): IEEE, 2019.
Guidotti, R., A. Monreale, and D. Pedreschi, "The AI black box Explanation Problem", ERCIM NEWS, no. 116, pp. 12–13, 2019.
Guidotti, R., A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, "A survey of methods for explaining black box models", ACM computing surveys (CSUR), vol. 51, no. 5, pp. 93, 2018.

FALLING WALLS CIRCLE TABLE: UNDERSTANDING THE SCIENTIFIC METHOD IN THE 21ST CENTURY

Against the background of the Covid-19 pandemic, which has proved fertile ground for an intensifying ‘information disorder’ characterised by conspiracy theories and ‘alternative facts’, it is vital to underline the relevance of science and the …

The ERC Advanced Grant XAI “Science & technology for the eXplanation of AI decision making”, led by Fosca Giannotti of the Italian CNR, in collaboration with the PhD program in “Data Science” of the Scuola Normale Superiore in Pisa, invites applic…

Acronym: XAI
Start Date: 30 September 2019
End Date: 30 September 2024
Affiliations:
Department of Computer Science, University of Pisa (DI-UNIPI)
Istituto di Scienza e Tecnologie dell’Informazione, National Research Council of Italy (ISTI-CNR)