GLocalX - From Local to Global Explanations of Black Box AI Models

Title: GLocalX - From Local to Global Explanations of Black Box AI Models
Publication Type: Journal Article
Year of Publication: 2021
Authors: Setzu, M, Guidotti, R, Monreale, A, Turini, F, Pedreschi, D, Giannotti, F
Volume: 294
Pagination: 103457
Date Published: 2021/05/01
ISSN: 0004-3702
Abstract

Artificial Intelligence (AI) has come to prominence as one of the major components of our society, with applications in most aspects of our lives. In this field, complex and highly nonlinear machine learning models such as ensemble models, deep neural networks, and Support Vector Machines have consistently shown remarkable accuracy in solving complex tasks. Although accurate, AI models are often “black boxes” that we are not able to understand. Relying on these models has a multifaceted impact and raises significant concerns about their transparency. Applications in sensitive and critical domains are a strong motivational factor in trying to understand the behavior of black boxes. We propose to address this issue by providing an interpretable layer on top of black box models by aggregating “local” explanations. We present GLocalX, a “local-first” model-agnostic explanation method. Starting from local explanations expressed in the form of local decision rules, GLocalX iteratively generalizes them into global explanations by hierarchically aggregating them. Our goal is to learn accurate yet simple interpretable models to emulate the given black box, and, if possible, replace it entirely. We validate GLocalX in a set of experiments in standard and constrained settings with limited or no access to either data or local explanations. Experiments show that GLocalX is able to accurately emulate several models with simple and small models, reaching state-of-the-art performance against natively global solutions. Our findings show how it is often possible to achieve a high level of both accuracy and comprehensibility of classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other. This is a key requirement for trustworthy AI, necessary for adoption in high-stakes decision-making applications.
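The sketch below illustrates the "local-first" idea described in the abstract: local decision rules are merged bottom-up into a smaller global rule set, accepting a merge only if fidelity to the black box is preserved. It is a minimal Python illustration under stated assumptions; the rule representation, the generalization operator, and the fidelity-based acceptance criterion here are hypothetical stand-ins, not the operators defined in the paper.

# Hypothetical sketch: greedy, bottom-up aggregation of local decision rules.
# Rules are dicts {"premises": {feature: (lo, hi)}, "outcome": class_label};
# this representation and the merge/acceptance logic are illustrative only.
from itertools import combinations

def covers(rule, record):
    # A record satisfies a rule if it falls inside every premise interval.
    return all(lo <= record[f] <= hi for f, (lo, hi) in rule["premises"].items())

def fidelity(rules, records, bb_labels):
    # Agreement between the rule set and the black box's labels; uncovered
    # records count as disagreements (a simplifying assumption).
    hits = 0
    for rec, y in zip(records, bb_labels):
        votes = [r["outcome"] for r in rules if covers(r, rec)]
        if votes and max(set(votes), key=votes.count) == y:
            hits += 1
    return hits / len(records)

def generalize(a, b):
    # Toy merge of two same-outcome rules: interval hull on shared features,
    # dropping features that appear in only one premise (a generalization).
    shared = a["premises"].keys() & b["premises"].keys()
    prem = {f: (min(a["premises"][f][0], b["premises"][f][0]),
                max(a["premises"][f][1], b["premises"][f][1])) for f in shared}
    return {"premises": prem, "outcome": a["outcome"]}

def aggregate(local_rules, records, bb_labels, tol=0.01):
    # Repeatedly apply the best merge that keeps fidelity within `tol` of the
    # current value; stop when no acceptable merge remains.
    rules = list(local_rules)
    current = fidelity(rules, records, bb_labels)
    improved = True
    while improved and len(rules) > 1:
        improved = False
        best = None
        for i, j in combinations(range(len(rules)), 2):
            if rules[i]["outcome"] != rules[j]["outcome"]:
                continue
            merged = generalize(rules[i], rules[j])
            cand = [r for k, r in enumerate(rules) if k not in (i, j)] + [merged]
            f = fidelity(cand, records, bb_labels)
            if f >= current - tol and (best is None or f > best[0]):
                best = (f, cand)
        if best is not None:
            current, rules = best
            improved = True
    return rules

The fidelity-guarded greedy loop mirrors the trade-off the abstract emphasizes: the rule set shrinks (comprehensibility) only as long as agreement with the black box (accuracy of emulation) is maintained.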

URL: https://www.sciencedirect.com/science/article/pii/S0004370221000084
DOI: 10.1016/j.artint.2021.103457
Journal: Artificial Intelligence