Science and technology for the eXplanation of AI decision making


Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also because the algorithms may inherit biases from human prejudices and collection artifacts hidden in the training data, leading to unfair or wrong decisions.
The project, funded by an ERC Advanced Grant awarded to Fosca Giannotti, focuses on the urgent open challenge of constructing meaningful explanations of opaque AI/ML systems. It introduces a local-to-global framework for black box explanation, articulated along three lines: (i) a language for expressing explanations in terms of expressive logic rules, with statistical and causal interpretation; (ii) the inference of local explanations that reveal the decision rationale for a specific case, by auditing the black box in the vicinity of the target instance; and (iii) the bottom-up generalization of many local explanations into simple global ones, with algorithms that optimize for quality and comprehensibility.

An intertwined line of research investigates (i) causal explanations, i.e., models that capture the causal relationships among the (endogenous and exogenous) variables and the decision, and (ii) mechanistic/physical models that capture the detailed data-generation behavior behind specific deep learning models, by means of the tools of the statistical physics of complex systems. The project will also develop: (1) an explanation infrastructure for benchmarking the methods developed within and outside the project, equipped with platforms for user assessment of explanations and for the crowdsensing of observational decision data; (2) an ethical-legal framework, covering both the compliance and the impact of the developed methods on current legal standards and on the "right of explanation" provisions of the GDPR; and (3) a repertoire of case studies in explanation-by-design, with priority on health and fraud detection applications.
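As a purely illustrative sketch of the local step (ii), the hypothetical Python snippet below (not the project's actual algorithms; the synthetic dataset, Gaussian neighborhood generator, and surrogate depth are assumptions) audits a black box around a target instance by querying it on a synthetic neighborhood and fitting a shallow decision tree, whose root-to-leaf paths can be read as local logic rules.

    # Minimal illustrative sketch of a local, rule-style explanation
    # (hypothetical example, not the project's actual code).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # A black-box classifier that we only query, never inspect.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    def explain_locally(instance, black_box, n_samples=1000, scale=0.3):
        """Audit the black box around `instance` and return an interpretable surrogate."""
        rng = np.random.default_rng(0)
        # (ii) generate a synthetic neighborhood around the target instance
        neighborhood = instance + rng.normal(0.0, scale, size=(n_samples, instance.shape[0]))
        # label the neighborhood with the black box's own decisions
        labels = black_box.predict(neighborhood)
        # fit a shallow surrogate; its root-to-leaf paths read as local logic rules
        return DecisionTreeClassifier(max_depth=3, random_state=0).fit(neighborhood, labels)

    target = X[0]
    surrogate = explain_locally(target, black_box)
    print("Black-box decision:", black_box.predict(target.reshape(1, -1))[0])
    print(export_text(surrogate, feature_names=[f"f{i}" for i in range(X.shape[1])]))

In this reading, the global step (iii) would merge and prune many such local rules obtained over a sample of instances, trading off coverage against comprehensibility.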

Naretto, F., R. Pellungrini, A. Monreale, F. M. Nardini, and M. Musolesi, "Predicting and Explaining Privacy Risk Exposure in Mobility Data", Discovery Science, Cham, Springer International Publishing, 2020.
Naretto, F., R. Pellungrini, F. M. Nardini, and F. Giannotti, "Prediction and Explanation of Privacy Risk on Mobility Data with Neural Networks", ECML PKDD 2020 Workshops, Cham, Springer International Publishing, 2020.
Pedreschi, D., F. Giannotti, R. Guidotti, A. Monreale, S. Ruggieri, and F. Turini, "Meaningful Explanations of Black Box AI Decision Systems", Proceedings of the AAAI Conference on Artificial Intelligence, 2019.
Panigutti, C., R. Guidotti, A. Monreale, and D. Pedreschi, "Explaining Multi-label Black-box Classifiers for Health Applications", International Workshop on Health Intelligence, Springer, 2019.
Guidotti, R., A. Monreale, and L. Cariaggi, "Investigating Neighborhood Generation Methods for Explanations of Obscure Image Classifiers", Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer, 2019.
Guidotti, R., and S. Ruggieri, "On the Stability of Interpretable Models", 2019 International Joint Conference on Neural Networks (IJCNN), IEEE, 2019.
Guidotti, R., A. Monreale, and D. Pedreschi, "The AI Black Box Explanation Problem", ERCIM News, no. 116, pp. 12–13, 2019.


Why do we need a responsible use of artificial intelligence?

Artificial intelligence systems suggest decisions on the basis of patterns and rules learned from data. Data record past experience and therefore contain all the good and all the bad of that experience.

The new national PhD program in Artificial Intelligence is on the launchpad! https://phd-ai.it/en/

Tecnovisionarie® 2021 International Award

The Le Tecnovisionarie 2021 award, dedicated to Artificial Intelligence and Big Data, was presented to Fosca Giannotti by the President of CNR with the following motivation:

Benchmarking and Survey of Explanation Methods for Black Box Models | AISC

Rapporteurs: Francesco Bodria, Francesca Naretto; Guest: Muhammad Rehman Zafar

Explainable Machine Learning for Trustworthy AI


FALLING WALLS CIRCLE TABLE: UNDERSTANDING THE SCIENTIFIC METHOD IN THE 21ST CENTURY

Against the background of the Covid-19 pandemic, which has proved fertile ground for intensifying the 'information disorder' characterised by conspiracy theories and 'alternative facts', it is vital to underline the relevance of science and the…

The ERC Advanced Grant XAI "Science & technology for the eXplanation of AI decision making", led by Fosca Giannotti of the Italian CNR, in collaboration with the PhD program in "Data Science" of Scuola Normale Superiore in Pisa, invites applications…

Acronym: XAI
Start Date: 30 September 2019
End Date: 30 September 2024
Funded by: European Commission
Type: European Project
Affiliation: Department of Computer Science, University of Pisa (DI-UNIPI); Istituto di Scienza e Tecnologie dell'Informazione, National Research Council of Italy (ISTI-CNR)