ECML PKDD Joint International Workshop on

Advances in Interpretable Machine Learning and Artificial Intelligence &
eXplainable Knowledge Discovery in Data Mining

Würzburg, Germany, Friday 20th September 2019

Call for Papers

In the past decade, we have witnessed the increasing deployment of powerful automated decision-making systems in settings ranging from the control of safety-critical systems to face detection on mobile phone cameras. Although remarkably effective at solving complex tasks, these systems are typically opaque: they provide no mechanism for understanding or exploring their behavior or the reasons underlying their decisions.

This opacity can raise legal, ethical and practical concerns, which have led to initiatives and recommendations on how to address these problems, calling for greater scrutiny in the deployment of automated decision-making systems. These include the “ACM Statement on Algorithmic Transparency and Accountability”, the “European Recommendations on Machine-Learned Automated Decision Making”, and the EU's General Data Protection Regulation (GDPR). The latter suggests in one of its clauses that individuals should be able to obtain explanations of decisions produced by automated processing, and to challenge those decisions. Moreover, recent studies have shown that models that learn from data can be attacked with crafted adversarial inputs so that they deliberately produce wrong decisions. Major challenges remain open and must be addressed to ensure that automated decision making can be deployed accountably, and that the resulting systems can be trusted. All this calls for joint efforts across technical, legal, sociological and ethical domains to address these increasingly pressing issues.

The purpose of AIMLAI-XKDD (Advances in Interpretable Machine Learning and Artificial Intelligence & eXplainable Knowledge Discovery in Data Mining) is to encourage principled research that advances explainable, transparent, ethical and fair data mining, machine learning and artificial intelligence. AIMLAI-XKDD is organized in two parts: a tutorial that introduces the audience to the topic, and a workshop that discusses recent advances in the field. The tutorial will provide a broad overview of the state of the art and of the major applications of explainable and transparent approaches, and will highlight the main open challenges. The workshop seeks top-quality submissions addressing important open issues related to explainable and interpretable data mining and machine learning models. Papers may present research results on any of the topics of interest for the workshop, as well as application experiences, tools and promising preliminary ideas. AIMLAI-XKDD invites contributions from researchers in academia and industry working on these challenges primarily from a technical point of view, but also from a legal, ethical or sociological perspective. Besides the central topic of interpretable algorithms and explanation methods, we also welcome submissions that address research questions such as "how can interpretability and explainability be measured and evaluated?" and "how can humans be integrated into the machine learning pipeline for interpretability purposes?".

Topics of interest include, but are not limited to:

Submissions with an interdisciplinary orientation are particularly welcome, e.g., work at the boundary between machine learning, AI, information visualization, human-machine interfaces, psychology, etc. Research driven by application cases where interpretability matters, e.g., medical applications, decision systems in law and public administration, or Industry 4.0, is also of interest.


Electronic submissions will be handled via Easychair.

Papers must be written in English and formatted according to the Springer Lecture Notes in Computer Science (LNCS) guidelines, following the style of the main conference.

The maximum length of both research and position papers is 12 pages in this format. Overlength papers will be rejected without review (papers with smaller page margins or font sizes than those specified in the author instructions and the style files will also be treated as overlength).

Authors who submit their work to AIMLAI-XKDD 2019 commit to presenting their paper at the workshop in case of acceptance. AIMLAI-XKDD 2019 considers the author list submitted with the paper final: no additions or deletions may be made after submission, either during the review period or, in case of acceptance, at the camera-ready stage.

A condition for inclusion in the post-proceedings is that at least one of the co-authors presents the paper at the workshop. Pre-proceedings will be available online before the workshop. A special issue of a relevant international journal with extended versions of selected papers is under consideration.

All accepted papers will be published as post-proceedings in the Springer Lecture Notes in Computer Science (LNCS) series.

All papers for AIMLAI-XKDD 2019 must be submitted through the EasyChair online submission system.

Important Dates

  • Paper Submission deadline: June 14th, 2019 (extended from June 7th, 2019)
  • Accept/Reject Notification: July 19th, 2019
  • Camera-ready deadline: July 26th, 2019
  • Workshop: September 20th, 2019


Tutorial Program Chairs

Workshop Program Chairs

Program Committee

  • Avishek Anand, Leibniz University, Germany
  • Michaël Aupetit, QCRI, Qatar
  • Livio Bioglio, University of Turin, Italy
  • Tijl De Bie, Ghent University, Belgium
  • Daniele Dell'Aglio, University of Zurich, Switzerland
  • Rich Caruana, Microsoft Research, USA
  • Alex Freitas, University of Kent, UK
  • Johannes Fürnkranz, TU Darmstadt, Germany
  • Sébastien Gambs, University of Quebec, Canada
  • Aristides Gionis, Aalto University, Finland
  • Thomas Guyet, Agro Campus Rennes/IRISA, France
  • Barbara Hammer, Bielefeld University, Germany
  • Himabindu Lakkaraju, Stanford University, USA
  • John A. Lee, UCLouvain, Belgium
  • Paulo Lisboa, Liverpool John Moores University, UK
  • Daniel Keim, Konstanz University, Germany
  • Véronique Masson, Univ Rennes/IRISA, France
  • Christoph Molnar, LMU, Munich, Germany
  • Amedeo Napoli, CNRS, France
  • Cecilia Panigutti, Scuola Normale Superiore, Italy
  • Roberto Pellungrini, University of Pisa, Italy
  • Philippe Preux, University of Lille, France
  • Forough Poursabzi-Sangdeh, Microsoft Research, USA
  • Laurence Rozé, INSA Rennes/IRISA, France
  • Mattia Setzu, University of Pisa, Italy
  • Marc Schoenauer, INRIA Saclay, France
  • Sameer Singh, University of California, USA
  • Alexandre Termier, University of Rennes/IRISA, France
  • Vicenç Torra, IIIA-CSIC, Spain
  • Grigorios Tsoumakas, Aristotle University of Thessaloniki, Greece
  • Cagatay Turkay, University of London, UK
  • Berk Ustun, Harvard University, USA
  • Gilles Venturini, Université de Tours, France
  • Emmanuel Vincent, INRIA Nancy, France
  • Jean-Daniel Zucker, IRD, France


The Tutorial

The slides are available and can be downloaded here.

Welcome and General Overview Anna Monreale, University of Pisa

Explaining Explanation Methods Riccardo Guidotti, ISTI-CNR

Explaining with Knowledge Graphs Pasquale Minervini, University College London

Explaining Privacy Risks Anna Monreale, University of Pisa

Visualizing Explanations Anna Monreale, University of Pisa

Conclusions and Q&A

Lunch break

The Workshop

Keynote talk Michael Sedlmair

Effect of Superpixel Aggregation on Explanations in LIME - A Case Study with Biological Data

Enriching Visual with Verbal Explanations for Relational Concepts - Combining LIME with Aleph

Local Interpretation Methods to Machine Learning Using the Domain of the Feature Space

Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations

Coffee break

Poster Session

Adversarial Robustness Curves

LioNets: Local Interpretation of Neural Networks through Penultimate Layer Decoding

Feedback from workshop participants and conclusion


The event will take place at the ECML PKDD Conference at the Hubland campus of the University of Würzburg, which is located on a hill outside the historic city center.


This workshop is partially supported by the European Community's H2020 Programme under the research and innovation programme, grant agreement 788352 (PRO-RES).

This workshop is partially supported by the European Community's H2020 Programme under the FET Flagship Project Proposal funding scheme, HumanE-AI.

This workshop is partially supported by the European Community's H2020 Programme under the funding scheme INFRAIA-1-2014-2015: Research Infrastructures, grant agreement 654024 (SoBigData).

This workshop is partially supported by the European Community's H2020 Programme under the research and innovation programme, grant agreement 825619 (AI4EU).


All inquiries should be sent to