Black-box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for the possible biases that the algorithms inherit from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions.
The project, funded by an ERC Advanced Grant awarded to Fosca Giannotti, addresses the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems. It introduces the local-to-global framework for black-box explanation, articulated along three lines: (i) a language for expressing explanations in terms of expressive logic rules, with statistical and causal interpretation; (ii) the inference of local explanations that reveal the decision rationale for a specific case, obtained by auditing the black box in the vicinity of the target instance; and (iii) the bottom-up generalization of many local explanations into simple global ones, with algorithms that optimize for quality and comprehensibility.

An intertwined line of research will investigate (i) causal explanations, i.e., models that capture the causal relationships among the (endogenous and exogenous) variables and the decision, and (ii) mechanistic/physical models that capture the detailed data-generation behavior behind specific deep learning models, by means of the tools of the statistical physics of complex systems.

The project will also develop: (1) an explanation infrastructure for benchmarking the methods developed within and outside the project, equipped with platforms for users' assessment of explanations and for the crowdsensing of observational decision data; (2) an ethical-legal framework, covering both the compliance and the impact of the developed methods with respect to current legal standards and the "right to explanation" provisions of the GDPR; and (3) a repertoire of case studies in explanation-by-design, with priority given to health and fraud-detection applications.
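The idea of auditing a black box in the vicinity of a target instance (line (ii) above) can be illustrated with a minimal local-surrogate sketch in the spirit of LIME-style methods. Everything below is a hypothetical stand-in, not the project's actual algorithms: `black_box` is a toy opaque model, and `local_explanation` simply samples a neighbourhood of the instance, queries the box, and fits a proximity-weighted linear surrogate whose coefficients serve as a local explanation.

```python
import numpy as np

def black_box(X):
    # Toy opaque model (hypothetical stand-in): a nonlinear decision rule
    # that the explainer is only allowed to query, not inspect.
    return ((X[:, 0] * X[:, 1] + 0.5 * X[:, 2]) > 0).astype(float)

def local_explanation(instance, predict_fn, n_samples=5000, sigma=0.3, seed=0):
    """Audit the black box around `instance`: sample a neighbourhood,
    query the box, and fit a distance-weighted linear surrogate whose
    coefficients act as local feature weights."""
    rng = np.random.default_rng(seed)
    # Perturb the target instance with Gaussian noise to form a neighbourhood.
    X = instance + rng.normal(0.0, sigma, size=(n_samples, instance.size))
    y = predict_fn(X)
    # Weight each sample by its proximity to the instance (RBF kernel).
    d2 = ((X - instance) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    # Weighted least squares with an intercept column (last coefficient).
    A = np.hstack([X, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coef[:-1]  # per-feature local weights

x0 = np.zeros(3)  # explain the prediction at the origin
weights = local_explanation(x0, black_box)
print(weights)  # near x0, feature 2 dominates (the product term vanishes)
```

The surrogate is faithful only locally: at the origin the product `x0*x1` has a vanishing gradient, so the fitted weights single out feature 2 as the locally decisive one, which is exactly the kind of case-specific rationale that local explanation aims to expose.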
Artificial intelligence systems suggest decisions on the basis of patterns and rules learned from data. Data record past experience, and therefore contain all the good and all the bad of that experience.
Against the background of the Covid-19 pandemic, which has proved fertile ground for intensifying the 'information disorder' characterised by conspiracy theories and 'alternative facts', it is vital to underline the relevance of science and the
The ERC Advanced Grant XAI "Science & technology for the eXplanation of AI decision making", led by Fosca Giannotti of the Italian CNR, in collaboration with the PhD program in "Data Science" of the Scuola Normale Superiore in Pisa, invites applications