In the past decade, we have witnessed the increasing deployment of powerful automated decision-making systems in settings ranging from the control of safety-critical systems to face detection on mobile phone cameras. Although remarkably powerful at solving complex tasks, these systems are typically opaque: they provide no mechanism to understand or explore their behavior, or the reasons underlying the decisions they take.
This opaqueness raises legal, ethical and practical concerns, which have led to initiatives and recommendations calling for greater scrutiny in the deployment of automated decision-making systems. These include the “ACM Statement on Algorithmic Transparency and Accountability”, the “European Recommendations on Machine-Learned Automated Decision Making”, and the EU's General Data Protection Regulation (GDPR). The latter suggests in one of its clauses that individuals should be able to obtain explanations of decisions proposed by automated processing, and to challenge those decisions. Moreover, recent studies have shown that models that learn from data can be attacked with crafted adversarial inputs that intentionally induce wrong decisions. Significant challenges remain open and must be addressed before automated decision making can be accountably deployed and the resulting systems trusted. All this calls for joint efforts across the technical, legal, sociological, and ethical domains to address these increasingly pressing issues.
The purpose of AIMLAI-XKDD (Advances in Interpretable Machine Learning and Artificial Intelligence & eXplainable Knowledge Discovery in Data Mining) is to encourage principled research that will advance explainable, transparent, ethical and fair data mining, machine learning, and artificial intelligence. AIMLAI-XKDD is organized in two parts: a tutorial to introduce the audience to the topic, and a workshop to discuss recent advances in the field. The tutorial will provide a broad overview of the state of the art and the major applications of explainable and transparent approaches, and will highlight the main open challenges. The workshop seeks top-quality submissions addressing important open issues in explainable and interpretable data mining and machine learning models. Papers should present research results on any of the workshop's topics of interest, as well as application experiences, tools and promising preliminary ideas. AIMLAI-XKDD invites contributions from researchers in academia and industry working on these challenges, primarily from a technical point of view but also from a legal, ethical or sociological perspective. Besides the central topic of interpretable algorithms and explanation methods, we also welcome submissions that address research questions such as "how can interpretability and explainability be measured and evaluated?" and "how can humans be integrated into the machine learning pipeline for interpretability purposes?".
Topics of interest include, but are not limited to:
Submissions with an interdisciplinary orientation are particularly welcome, e.g., work at the boundary between ML, AI, information visualization, human-machine interfaces, psychology, etc. Research driven by application cases where interpretability matters is also of interest, e.g., medical applications, decision systems in law and public administration, Industry 4.0, etc.
Electronic submissions will be handled via EasyChair.
Papers must be written in English and formatted according to the Springer Lecture Notes in Computer Science (LNCS) guidelines, following the style of the main conference.
The maximum length of both research and position papers is 12 pages in this format. Overlength papers will be rejected without review; papers with smaller page margins or font sizes than specified in the author instructions and style files will also be treated as overlength.
Authors who submit their work to AIMLAI-XKDD 2019 commit to presenting their paper at the workshop if it is accepted. AIMLAI-XKDD 2019 considers the author list submitted with the paper as final: no additions or deletions may be made after submission, either during the review period or, in case of acceptance, at the camera-ready stage.
A condition for inclusion in the post-proceedings is that at least one of the co-authors has presented the paper at the workshop. Pre-proceedings will be available online before the workshop. A special issue of a relevant international journal with extended versions of selected papers is under consideration.
All accepted papers will be published as post-proceedings in the Springer Lecture Notes in Computer Science (LNCS) series.
The slides are available for download.
The event will take place at the ECML PKDD conference on the Hubland campus of the University of Würzburg, which sits on a hill outside the historic city center.
This workshop is partially supported by the European Community H2020 Programme under the research and innovation programme, grant agreements 788352 (Pro-Res) and 825619 (AI4EU); under the FET Flagship Project Proposal HumanE-AI; and under the funding scheme INFRAIA-1-2014-2015: Research Infrastructures, grant agreement 654024 (SoBigData).
All inquiries should be sent to