On The Stability of Interpretable Models

Title: On The Stability of Interpretable Models
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Guidotti, R., Ruggieri, S.
Conference Name: 2019 International Joint Conference on Neural Networks (IJCNN)
Abstract: Interpretable classification models are built with the purpose of providing a comprehensible description of the decision logic to an external oversight agent. When considered in isolation, a decision tree, a set of classification rules, or a linear model is widely recognized as human-interpretable. However, such models are generated as part of a larger analytical process. Bias in data collection and preparation, or in model construction, may severely affect the accountability of the design process. We conduct an experimental study of the stability of interpretable models with respect to feature selection, instance selection, and model selection. Our conclusions should raise the awareness and attention of the scientific community to the need for a stability impact assessment of interpretable models.
PDF: ijcnn2019stability.pdf (847.98 KB)