XKDD 2023 - International Workshop and Tutorial on eXplainable Knowledge Discovery in Data Mining

Start:
2023/09/18 02:00 Europe/Rome
End:
2023/09/18 02:00 Europe/Rome
Location:
OGR Torino, Italy
Link:
XKDD 2023
Description:

In the past decade, machine learning based decision systems have been widely adopted in application domains such as credit scoring, insurance risk assessment, and health monitoring, in which accuracy is of the utmost importance. Although these systems have immense potential to improve decision making in different fields, their use also presents ethical and legal risks, such as codifying biases, jeopardizing transparency and privacy, and reducing accountability. These risks arise across many applications and are made even more serious and subtle by the opacity of recent decision support systems, which are often complex and whose internal logic is usually inaccessible to humans.

Nowadays, most Artificial Intelligence (AI) systems are based on machine learning algorithms. The relevance of, and need for, ethics in AI is supported and highlighted by various initiatives from the research community that provide recommendations and guidelines for making AI-based decision systems explainable and compliant with legal and ethical requirements. These include the EU's GDPR, which introduces, to some extent, a right for all individuals to obtain "meaningful explanations of the logic involved" when automated decision making takes place; the "ACM Statement on Algorithmic Transparency and Accountability"; Informatics Europe's "European Recommendations on Machine-Learned Automated Decision Making"; and "The Ethics Guidelines for Trustworthy AI" provided by the EU High-Level Expert Group on AI.

The challenge to design and develop trustworthy AI-based decision systems is still open and requires a joint effort across technical, legal, sociological and ethical domains.

The purpose of XKDD, eXplainable Knowledge Discovery in Data Mining, is to encourage principled research that will lead to the advancement of explainable, transparent, ethical and fair data mining and machine learning. This year the workshop also seeks submissions addressing important, under-explored issues in specific fields related to eXplainable AI (XAI), such as XAI for a more social and responsible AI, XAI as a tool to align AI with human values, XAI for outlier and anomaly detection, quantitative and qualitative evaluation of XAI approaches, and XAI case studies. The workshop seeks top-quality submissions related to ethical, fair, explainable and transparent data mining and machine learning approaches. Papers should present research results on any of the topics of interest for the workshop, as well as tools and promising preliminary ideas. XKDD welcomes contributions from researchers in academia and industry working on these challenges primarily from a technical point of view, but also from a legal, ethical or sociological perspective.

Topics of interest include, but are not limited to:

  • XAI for Social AI
  • XAI for Responsible AI
  • XAI to Align AI with Human Values
  • XAI for Outlier and Anomaly Detection
  • Quantitative and Qualitative Evaluation of XAI approaches
  • Transparent-by-Design Models
  • XAI Case studies
  • XAI for Privacy-Preserving Systems
  • XAI for Federated Learning
  • XAI for Time Series based Approaches
  • XAI for Graph based Approaches
  • XAI for Visualization
  • XAI in Human-Machine Interaction
  • XAI in Human-in-the-Loop Interactions
  • Counterfactual Explanations
  • Human-Model Interfaces for XAI approaches
  • Explainable Artificial Intelligence (XAI)
  • Interpretable Machine Learning
  • Transparent Data Mining
  • XAI for Fairness Checking
  • Explanation, Accountability and Liability from an Ethical and Legal Perspective
  • For XAI topics related to dynamic learning environments, please refer to DynXAI

Submissions with a focus on important, under-explored issues related to XAI are particularly welcome, e.g. XAI for fairness checking approaches, XAI for privacy-preserving systems, XAI for federated learning, XAI for time series and graph based approaches, XAI for visualization, XAI in human-machine interaction, benchmarking of XAI methods, and XAI case studies.
