ECML PKDD International Workshop and Tutorial on

eXplainable Knowledge Discovery in Data Mining

Virtual Event, Monday 13th September 2021

Attending the workshop

The XKDD 2021 workshop will be held online as a Zoom Webinar. You can join at the following link:

You can interact with the other participants using the following Slack invitation link:

Call for Papers

Over the past decade, machine learning based decision systems have been adopted in a wide range of application domains, such as credit scoring, insurance risk assessment, and health monitoring, in which accuracy is of the utmost importance. Although these systems have great potential to improve decisions in many fields, their use may present ethical and legal risks, such as codifying biases, jeopardizing transparency and privacy, and reducing accountability. Unfortunately, these risks arise across many applications, and they are made even more serious and subtle by the opacity of recent decision support systems, which are often complex and whose internal logic is usually inaccessible to humans.

Nowadays, most Artificial Intelligence (AI) systems are based on machine learning algorithms. The relevance of and need for ethics in AI is highlighted by various initiatives arising from the research community to provide recommendations and guidelines for making AI-based decision systems explainable and compliant with legal and ethical requirements. These include the EU's GDPR, which introduces, to some extent, a right for all individuals to obtain "meaningful explanations of the logic involved" when automated decision making takes place; the "ACM Statement on Algorithmic Transparency and Accountability"; Informatics Europe's "European Recommendations on Machine-Learned Automated Decision Making"; and "The Ethics Guidelines for Trustworthy AI" provided by the EU High-Level Expert Group on AI.

The challenge to design and develop trustworthy AI-based decision systems is still open and requires a joint effort across technical, legal, sociological and ethical domains.

The purpose of XKDD, eXplainable Knowledge Discovery in Data Mining, is to encourage principled research that will lead to the advancement of explainable, transparent, ethical and fair data mining and machine learning. XKDD is organized in two parts: a tutorial to introduce the audience to the topic, and a workshop to discuss recent advances in the research field. The tutorial will provide a broad overview of the state of the art on the major applications of explainable and transparent approaches and their relationship with fairness and privacy. Moreover, it will present Python/R libraries that show in practice how explainability and fairness tasks can be addressed. The workshop will seek top-quality submissions addressing important open issues related to ethical, fair, explainable and transparent data mining and machine learning. Papers should present research results in any of the topics of interest for the workshop, as well as application experiences, tools and promising preliminary ideas. XKDD invites contributions from researchers in academia and industry working on these challenges primarily from a technical point of view, but also from a legal, ethical or sociological perspective.

Topics of interest include, but are not limited to:

Submissions with a focus on fairness and with an interdisciplinary orientation are particularly welcome, e.g., work at the boundary between machine learning, AI, information visualization, human-machine interfaces, psychology, etc. Research driven by application cases where interpretability matters is also of interest, e.g., medical applications, decision systems in law and administration, Industry 4.0, etc.

The call for papers can be downloaded here.


Electronic submissions will be handled via EasyChair.

Papers must be written in English and formatted according to the Springer Lecture Notes in Computer Science (LNCS) guidelines following the style of the main conference (format).

The maximum length of either research or position papers is 16 pages in this format. Overlength papers will be rejected without review (papers with smaller page margins or font sizes than those specified in the author instructions and the style files will also be treated as overlength).

Authors who submit their work to XKDD 2021 commit to presenting their paper at the workshop if it is accepted. XKDD 2021 considers the author list submitted with the paper as final: no additions or deletions may be made after paper submission, either during the review period or, in case of acceptance, at the final camera-ready stage.

A condition for inclusion in the post-proceedings is that at least one of the co-authors presents the paper at the workshop (either digitally or in person, depending on how the situation evolves). Pre-proceedings will be available online before the workshop. A special issue of a relevant international journal with extended versions of selected papers is under consideration.

All accepted papers will be published as post-proceedings in the Springer Lecture Notes in Computer Science (LNCS) series.

All papers for XKDD 2021 must be submitted using the online submission system at EasyChair.

Important Dates

  • Paper Submission deadline: July 7, 2021 (extended from June 23)
  • Accept/Reject Notification: July 21, 2021 (extended from July 10)
  • Camera-ready deadline: August 14, 2021 (extended from July 31)
  • Workshop: September 13, 2021


Program Chairs

Invited Speakers

Prof. Dr. Andreas HOLZINGER

Institute for Medical Informatics, Statistics & Documentation, Medical University of Graz, Austria
Andreas Holzinger works on Human-Centered AI (HCAI), motivated by efforts to improve human health. Andreas pioneered interactive machine learning with the human-in-the-loop. For his achievements, he was elected a member of Academia Europaea in 2019. Andreas is paving the way towards multimodal causability, promoting robust interpretable machine learning, and advocating for a synergistic approach to put the human in control of AI and align AI with human values, privacy, security, and safety.

Program Committee


Welcome and General Overview


Motivations for Explainability and Links to Other Ethical Aspects

Anna Monreale

Explaining and Checking Fairness for Predictive Models

Przemyslaw Biecek

Coffee Break 30 minutes

Useful Explanations and How to Find Them

Riccardo Guidotti

Explaining with LIME and LORE Wrapped in the X-Lib Library

Salvatore Rinzivillo

Lunch Break 60 minutes


Workshop - Keynote Talk
Ingredients of future medical AI: explainability and robustness

Prof. Dr. Andreas HOLZINGER

Workshop - Research Paper presentation
Explanations for Network Embedding-based Link Predictions

Bo Kang, Jefrey Lijffijt and Tijl De Bie


Workshop - Research Paper presentation
This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition

Meike Nauta, Annemarie Jutte, Jesper Provoost and Christin Seifert


Workshop - Research Paper presentation
Prototypical Convolutional Neural Network for a phrase-based explanation of sentiment classification

Kamil Pluciński, Mateusz Lango and Jerzy Stefanowski


Workshop - Research Paper presentation
Exploring counterfactual explanations for classification and regression trees

Suryabhan Singh Hada and Miguel Carreira-Perpinan


Coffee Break 30 minutes

Workshop - Research Paper presentation
How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice

Tom Vermeire, Thibault Laugel, Xavier Renard, David Martens and Marcin Detyniecki

Workshop - Research Paper presentation
Using Explainable Boosting Machines (EBMs) to Detect Common Flaws in Data

Zhi Chen, Sarah Tan, Harsha Nori, Kori Inkpen, Yin Lou and Rich Caruana

Workshop - Research Paper presentation
Towards explainable meta-learning

Katarzyna Woźnica and Przemysław Biecek




The XKDD workshop will be held online as a Zoom Webinar.


This workshop is partially supported by the European Community H2020 Program under the research and innovation programme, grant agreement 952215 TAILOR.

This workshop is partially supported by the European Community H2020 Program under the funding scheme FET Flagship Project Proposal, grant agreement 952026 HumanE-AI-Net.

This workshop is partially supported by the European Community H2020 Program under the funding scheme INFRAIA-2019-1: Research Infrastructures, grant agreement 871042 SoBigData++.

This workshop is partially supported by the European Community H2020 Program under the research and innovation programme, grant agreement 825619 AI4EU.


All inquiries should be sent to