MALOTEC Seminar - Riccardo Guidotti: Evaluating Local Explanation Methods on Ground Truth



Riccardo Guidotti is an Assistant Professor at the Department of Computer Science, University of Pisa, and a member of the Knowledge Discovery and Data Mining Laboratory (KDDLab), a joint research group with the Information Science and Technology Institute of the National Research Council in Pisa. He won an IBM fellowship and was an intern at IBM Research Dublin, Ireland, in 2015, and received the DSAA New Generation Data Scientist Award in 2018. His research interests include personal data mining, clustering, explainable models, and the analysis of transactional data.

This video is the recording of his MALOTEC Seminar of April 9th 2021.

Title: Evaluating Local Explanation Methods on Ground Truth

Abstract: Evaluating local explanation methods is a difficult task because there is no shared, universally accepted definition of explanation. In the literature, one of the most common ways to assess the performance of an explanation method is to measure the fidelity of the explanation with respect to the classification of the black box model adopted by an artificial intelligence system for making a decision. However, this kind of evaluation only measures how closely the local explainer reproduces the behavior of the black box classifier with respect to the final decision. The explanation provided by the local explainer could therefore differ in content even though it leads to the same decision as the AI system. We propose an approach that measures the extent to which the explanations returned by local explanation methods are correct with respect to a synthetic ground truth explanation. The proposed methodology enables the generation of synthetic transparent classifiers for which the reason for each decision, i.e., a synthetic ground truth explanation, is available by design. Experimental results show how the proposed approach makes it easy to evaluate local explanations on the ground truth and to characterize the quality of local explanation methods.
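To illustrate the idea behind the abstract, the following is a minimal sketch, not the authors' actual method: we build a synthetic transparent classifier (here a sparse linear model) whose ground-truth explanation is its coefficient vector, known by design, and then score a candidate local explanation against that ground truth. The `noisy_explainer` stands in for a real local explainer such as LIME or SHAP, and the metrics (cosine similarity, top-k feature F1) are illustrative choices, not necessarily those used in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic transparent classifier: a sparse linear model whose
# coefficients ARE the ground-truth explanation by design.
n_features = 10
true_coef = np.zeros(n_features)
true_coef[[0, 3, 7]] = [2.0, -1.5, 1.0]  # only these features matter

def transparent_classifier(X):
    """Decision is the sign of the linear score; fully interpretable."""
    return (X @ true_coef > 0).astype(int)

def noisy_explainer(x, noise=0.3):
    # Hypothetical stand-in for a real local explainer (e.g. LIME):
    # it recovers the true coefficients up to some attribution noise.
    return true_coef + rng.normal(scale=noise, size=n_features)

# Two ways to compare an explanation to the ground truth:
def cosine(a, b):
    """Cosine similarity between attribution vectors (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def feature_f1(expl, truth, k=3):
    """F1 between the explainer's top-k features and the truly relevant ones."""
    top = set(np.argsort(-np.abs(expl))[:k])
    relevant = set(np.flatnonzero(truth))
    tp = len(top & relevant)
    prec, rec = tp / len(top), tp / len(relevant)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

x = rng.normal(size=n_features)
expl = noisy_explainer(x)
print(cosine(expl, true_coef))      # close to 1.0 for a good explainer
print(feature_f1(expl, true_coef))
```

A perfect explainer would return `true_coef` exactly and score 1.0 on both metrics; evaluating many explainers this way on classifiers whose ground truth is known by design is the kind of characterization the abstract describes.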

