ECML PKDD International Workshop on

eXplainable Knowledge Discovery in Data Mining

Monday 19th September 2022

Call for Papers

Over the past decade, machine learning based decision systems have been adopted in a wide range of application domains, such as credit scoring, insurance risk assessment, and health monitoring, in which accuracy is of the utmost importance. Although these systems have an immense potential to improve decision making in many fields, their use may present ethical and legal risks, such as codifying biases, jeopardizing transparency and privacy, and reducing accountability. Unfortunately, these risks arise across applications. They are made even more serious, and more subtle, by the opacity of recent decision support systems, which are often complex and whose internal logic is usually inaccessible to humans.

Nowadays, most Artificial Intelligence (AI) systems are based on Machine Learning algorithms. The relevance of and need for ethics in AI are supported and highlighted by various initiatives from the research community that provide recommendations and guidelines for making AI-based decision systems explainable and compliant with legal and ethical requirements. These include the EU's GDPR regulation, which introduces, to some extent, a right for all individuals to obtain "meaningful explanations of the logic involved" when automated decision making takes place, the "ACM Statement on Algorithmic Transparency and Accountability", Informatics Europe's "European Recommendations on Machine-Learned Automated Decision Making", and "The ethics guidelines for trustworthy AI" provided by the EU High-Level Expert Group on AI.

The challenge to design and develop trustworthy AI-based decision systems is still open and requires a joint effort across technical, legal, sociological and ethical domains.

The purpose of XKDD, eXplainable Knowledge Discovery in Data Mining, is to encourage principled research that will lead to the advancement of explainable, transparent, ethical and fair data mining and machine learning. This year the workshop will also seek submissions addressing under-explored issues in specific fields related to eXplainable AI (XAI), such as privacy and fairness, applications in real case studies, benchmarking, and the explanation of decision systems based on time series and graphs, which are becoming increasingly important in today's applications. The workshop seeks top-quality submissions related to ethical, fair, explainable and transparent data mining and machine learning approaches. Papers should present research results in any of the topics of interest for the workshop, as well as tools and promising preliminary ideas. XKDD invites contributions from researchers in academia and industry working on these challenges primarily from a technical point of view, but also from a legal, ethical or sociological perspective.

Topics of interest include, but are not limited to:

Submissions with a focus on under-explored issues related to XAI are particularly welcome, e.g. XAI for fairness checking approaches, XAI for privacy-preserving systems, XAI for federated learning, XAI for time series and graph-based approaches, XAI for visualization, XAI in human-machine interaction, benchmarking of XAI methods, and XAI case studies.

The call for papers can be downloaded here.


Electronic submissions will be handled via Easychair.

Papers must be written in English and formatted according to the Springer Lecture Notes in Computer Science (LNCS) guidelines following the style of the main conference (format).

The maximum length of either research or position papers is 16 pages in this format. Overlength papers will be rejected without review (papers with smaller page margins and font sizes than specified in the author instructions and set in the style files will also be treated as overlength).

Authors who submit their work to XKDD 2022 commit themselves to presenting their paper at the workshop in case of acceptance. XKDD 2022 considers the author list submitted with the paper as final. No additions or deletions to this list may be made after paper submission, either during the review period or, in case of acceptance, at the final camera-ready stage.

A condition for inclusion in the post-proceedings is that at least one of the co-authors presents the paper at the workshop (either digitally or in person, depending on how the situation evolves). Pre-proceedings will be available online before the workshop. A special issue of a relevant international journal with extended versions of selected papers is under consideration.

All accepted papers will be published as post-proceedings in the Springer Lecture Notes in Computer Science (LNCS) series.

All papers for XKDD 2022 must be submitted via the online submission system at Easychair.

Important Dates

  • Paper Submission deadline: July 4, 2022 (extended from June 20)
  • Accept/Reject Notification: July 20, 2022 (extended from July 13)
  • Camera-ready deadline: July 31, 2022
  • Workshop: September 19, 2022


Program Chairs

Invited Speakers

Wojciech Samek

Professor at TU Berlin, Head of AI Department at Fraunhofer HHI, Fellow at BIFOLD

From Attribution Maps to Concept-Level Explainable AI

The emerging field of Explainable AI (XAI) aims to bring transparency to today's powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, thereby identifying "where" important features occur (but not providing information about what they represent), global explanation techniques visualize "what" concepts a model has generally learned to encode. Both types of methods thus only provide partial insights and leave the burden of interpreting the model's reasoning to the user. Building on Layer-wise Relevance Propagation (LRP), one of the most popular local XAI techniques, this talk will connect the lines of local and global XAI research by introducing Concept Relevance Propagation (CRP), a next-generation XAI technique which explains individual predictions in terms of localized and human-understandable concepts. Unlike the related state-of-the-art, CRP answers both the "where" and "what" questions, thereby providing deep insights into the model's reasoning process. In the talk we will demonstrate on multiple datasets, model architectures and application domains that CRP-based analyses allow one to (1) gain insights into the representation and composition of concepts in the model as well as quantitatively investigate their role in prediction, (2) identify and counteract Clever Hans filters focusing on spurious correlations in the data, and (3) analyze whole concept subspaces and their contributions to fine-grained decision making. By lifting XAI to the concept level, CRP opens up a new way to analyze, debug and interact with ML models, which is of particular interest in safety-critical applications and the sciences.
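To illustrate the kind of local attribution map the abstract refers to, here is a minimal sketch of the simple "gradient x input" attribution rule (not LRP or CRP themselves) on a hypothetical toy logistic model; the feature values, weights, and function names are illustrative assumptions, not part of the talk:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_x_input(x, w, b):
    """Per-feature attribution for a toy logistic model f(x) = sigmoid(w.x + b).

    For this linear model the input gradient is w * sigmoid'(z), so the
    popular "gradient x input" attribution reduces to x_i * w_i * sigmoid'(z).
    """
    z = w @ x + b
    grad = w * sigmoid(z) * (1.0 - sigmoid(z))  # df/dx, computed analytically
    return x * grad                              # answers "where", not "what"

x = np.array([2.0, 0.0, -1.0])   # hypothetical input features
w = np.array([0.5, 3.0, 0.1])    # hypothetical learned weights
attr = gradient_x_input(x, w, b=0.0)
```

Note how the second feature receives zero attribution despite its large weight, because its input value is zero: such maps say where relevance lies for this input, but nothing about what the feature encodes, which is exactly the gap concept-level methods like CRP aim to close.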
Wojciech Samek is a professor at the Technical University of Berlin and is jointly heading the AI Department at Fraunhofer Heinrich Hertz Institute. He studied computer science in Berlin and Edinburgh, was a visiting researcher at the NASA Ames Research Center, Mountain View, USA, and received the Ph.D. degree with distinction from TU Berlin in 2014. He then founded the Machine Learning Group at Fraunhofer HHI, which he headed until 2020. Dr. Samek is a Fellow at BIFOLD - Berlin Institute for the Foundation of Learning and Data and associated faculty at the ELLIS Unit Berlin and the DFG Graduate School BIOQIC. Furthermore, he is a senior editor of IEEE TNNLS, an editorial board member of PLoS ONE and Pattern Recognition, and an elected member of the IEEE MLSP Technical Committee. He is the recipient of multiple best paper awards, including the 2020 Pattern Recognition Best Paper Award, and part of the expert group developing the ISO/IEC MPEG-17 NNR standard. He is the leading editor of the Springer book "Explainable AI: Interpreting, Explaining and Visualizing Deep Learning" (2019), co-editor of the open access Springer book "xxAI – Beyond explainable AI" (2022), and organizer of various special sessions, workshops and tutorials on topics such as explainable AI, neural network compression, and federated learning. Dr. Samek has co-authored more than 150 peer-reviewed journal and conference papers; some of them listed by Thomson Reuters as "Highly Cited Papers" (i.e., top 1%) in the field of Engineering.

Anna Monreale

Associate professor at the Computer Science Department of the University of Pisa

The Relationship between Explainability & Privacy in AI

In recent years we have been witnessing the diffusion of AI systems based on powerful machine learning models which find application in many critical contexts such as medicine, financial markets, credit scoring, etc. In such contexts it is particularly important to design Trustworthy AI systems while guaranteeing the interpretability of their decision reasoning as well as privacy protection and awareness. In this talk we will explore the possible relationships between these two ethical values that Trustworthy AI must take into consideration. We will address research questions such as: How can explainability help privacy awareness? Can explanations jeopardize individual privacy protection?

Anna Monreale is an associate professor at the Computer Science Department of the University of Pisa and a member of the Knowledge Discovery and Data Mining Laboratory (KDD-Lab), a joint research group with the Information Science and Technology Institute of the National Research Council in Pisa. She was a visiting student at the Department of Computer Science of the Stevens Institute of Technology (Hoboken, New Jersey, USA) in 2010. Her research interests include big data analytics, social networks, and the privacy issues arising in mining these kinds of social and human sensitive data. In particular, she is interested in the evaluation of privacy risks during analytical processes and in the design of privacy-by-design technologies in the era of big data. She earned her Ph.D. in computer science from the University of Pisa in June 2011 with a dissertation on privacy-by-design in data mining.

Program Committee

  • Leila Amgoud, CNRS, France
  • Francesco Bodria, Scuola Normale Superiore, Italy
  • Umang Bhatt, University of Cambridge, UK
  • Miguel Couceiro, Inria, France
  • Menna El-Assady, AI Center of ETH, Switzerland
  • Josep Domingo-Ferrer, Universitat Rovira i Virgili, Spain
  • Françoise Fessant, Orange Labs, France
  • Andreas Holzinger, Medical University of Graz, Austria
  • Thibault Laugel, AXA, France
  • Paulo Lisboa, Liverpool John Moores University, UK
  • Marcin Luckner, Warsaw University of Technology, Poland
  • Ioannis Mollas, Aristotle University of Thessaloniki, Greece
  • Amedeo Napoli, CNRS, France
  • Antonio Rago, Imperial College London, UK
  • Jan Ramon, Inria, France
  • Xavier Renard, AXA, France
  • Mahtab Sarvmaili, Dalhousie University, Canada
  • Christin Seifert, University of Duisburg-Essen, Germany
  • Udo Schlegel, Konstanz University, Germany
  • Mattia Setzu, University of Pisa, Italy
  • Dominik Slezak, University of Warsaw, Poland
  • Fabrizio Silvestri, Università di Roma, Italy
  • Francesco Spinnato, Scuola Normale Superiore, Italy
  • Vicenc Torra, Umea University, Sweden
  • Cagatay Turkay, University of Warwick, UK
  • Marco Virgolin, Chalmers University of Technology, Netherlands
  • Martin Jullum, Norwegian Computing Center, Norway
  • Albrecht Zimmermann, Université de Caen, France
  • Guangyi Zhang, KTH Royal Institute of Technology, Sweden


Welcome and General Overview


First Keynote Talk: Wojciech Samek

From Attribution Maps to Concept-Level Explainable AI

Research Paper presentation (15 min + 3 min Q&A)

Concept-level Debugging of Part-Prototype Networks

Andrea Bontempelli, Stefano Teso, Fausto Giunchiglia and Andrea Passerini.

GlanceNets: Interpretable, Leak-proof Concept-based Models

Emanuele Marconato, Andrea Passerini and Stefano Teso.

Coffee Break 30 minutes

Research Paper presentation (15 min + 3 min Q&A)

Is Attention Interpretation? A Quantitative Assessment On Sets

Jonathan Haab, Nicolas Deutschmann and Maria Rodriguez-Martinez.

From Disentangled Representation to Concept Ranking: Interpreting Deep Representations in Image Classification tasks

Eric Ferreira Dos Santos and Alessandra Mileo

RangeGrad: Explaining Neural Networks by Measuring Uncertainty through Bound Propagation

Sam Pinxteren, Marco Favier and Toon Calders.

An Empirical Evaluation of Predicted Outcomes as Explanations in Human-AI Decision-Making

Johannes Jakubik, Jakob Schöffer, Vincent Hoge, Michael Vössing and Niklas Kühl.

Local Multi-Label Explanations for Random Forest

Nikolaos Mylonas, Ioannis Mollas, Nick Bassiliades and Grigorios Tsoumakas

Interpretable and Reliable Rule Classification based on Conformal Prediction

Husam Abdelqader, Evgueni Smirnov, Marc Pont and Marciano Geijselaers.

Lunch Break


Second Keynote Talk: Anna Monreale

The Relationship between Explainability & Privacy in AI

Research Paper presentation (15 min + 3 min Q&A)

Measuring the Burden of (Un)fairness Using Counterfactuals

Alejandro Kuratomi, Evaggelia Pitoura, Panagiotis Papapetrou, Tony Lindgren and Panayiotis Tsaparas.

Are SHAP values biased towards high-entropy features?

Raphael Baudeu, Marvin Wright and Markus Loecher.

Simple explanations to summarise Subgroup Discovery outcomes: a case study concerning patient phenotyping

Enrique Valero-Leal, Manuel Campos and Jose M. Juarez.

Limits of XAI task performance evaluation: an e-sport prediction example

Corentin Boidot, Riwal Lefort, Pierre De Loor and Olivier Augereau.

Coffee Break 30 minutes

Research Paper presentation (12 min + 3 min Q&A)

Improving the quality of rule-based GNN explanations

Ataollah Kamal, Elouan Vincent, Marc Plantevit and Celine Robardet.

GREASE: Generate Factual and Counterfactual Explanations for GNN-based Recommendations

Ziheng Chen, Fabrizio Silvestri, Jia Wang, Yongfeng Zhang, Zhenhua Huang, Hongshik Ahn and Gabriele Tolomei.

Exposing Racial Dialect Bias in Abusive Language Detection: Can Explainability Play a Role?

Marta Marchiori Manerba and Virginia Morini.

On the Granularity of Explanations in Model Agnostic NLP Interpretability

Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, and Marcin Detyniecki.

SAI Panel


The event will take place at the ECML-PKDD 2022 conference venue, the World Trade Center (WTC) Auditorium, where the main conference is located. The venue is close to the train and bus stations and within walking distance of the historic city center.

Additional information about the location can be found at
the main conference web page: ECML-PKDD 2022

ECML-PKDD 2022 plans a hybrid organization for workshops. A person can therefore attend the event online as long as they register for the conference using the videoconference registration fee: here. Please note that the videoconference registration also allows attendees to follow the main conference. However, interactions and discussions are much easier face-to-face. We therefore believe it is important that speakers attend the workshop in person, and we highly encourage authors of submitted papers to plan to participate on-site at the event.


This workshop is partially supported by the European Community H2020 Program under the research and innovation programme, grant agreement 834756 "XAI: Science and technology for the explanation of AI decision making".

This workshop is partially supported by the European Community H2020 Program under the funding scheme FET Flagship Project Proposal, grant agreement 952026 HumanE-AI-Net.

This workshop is partially supported by the European Community H2020 Program under the funding scheme INFRAIA-2019-1: Research Infrastructures, grant agreement 871042 SoBigData++.

This workshop is partially supported by the European Community H2020 Program under research and innovation programme, grant agreement 952215 TAILOR.

This workshop is partially supported by the CHIST-ERA grant CHIST-ERA-19-XAI-010 (project SAI), funded by MUR (grant number not yet available), FWF (N. I 5205), EPSRC (N. EP/V055712/1), NCN (N. 2020/02/Y/ST6/00064), ETAg (N. SLTAT21096), and BNSF (N. KP-06-AOO2/5).


All inquiries should be sent to