Artificial Intelligence (AI) techniques based on big data and algorithmic processing are increasingly used to guide decisions in important societal spheres, including hiring, university admissions, loan granting, and crime prediction. However, there are growing concerns about the epistemic and normative quality of AI evaluations and predictions. In particular, there is strong evidence that algorithms may sometimes amplify rather than eliminate existing bias and discrimination, and thereby have negative effects on social cohesion and on democratic institutions.
Despite a surge of work in this area in the last few years, we still lack a comprehensive understanding of how pertinent concepts of bias and discrimination should be interpreted in the context of AI, and of which socio-technical options for combating bias and discrimination are both realistically possible and normatively justified.
The workshop solicits contributions on topics including, but not limited to, those listed below, spanning all areas of AI (supervised/unsupervised learning, information retrieval and recommender systems, HCI, constraint solving, complex systems and networks, etc.) as well as interdisciplinary work bridging law and the social sciences.
- Bias and Fairness by Design
  - Fairness measures
  - Counterfactual reasoning
  - Metric learning
  - Impossibility results
  - Multi-objective strategies for fairness, explainability, privacy, class imbalance, rare events, etc.
  - Federated learning
  - Resource allocation
  - Personalized interventions
  - Debiasing strategies for data, algorithms, and procedures
  - Human-in-the-loop approaches
- Methods to Audit, Measure, and Evaluate Bias and Fairness
  - Auditing methods and tools
  - Benchmarks and case studies
  - Standards and best practices
  - Explainability, traceability, data and model lineage
  - Visual analytics and HCI for understanding and auditing bias and fairness
  - Software engineering approaches
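To make the "fairness measures" topic concrete, the sketch below computes one of the simplest such measures, the demographic parity difference: the gap in positive-prediction rates between two groups. The function name, toy predictions, and group labels are invented for illustration; they are not part of the call.

```python
# Minimal sketch of one common fairness measure: the demographic parity
# difference, i.e. the absolute gap in positive-prediction rates
# between two demographic groups. Toy data, for illustration only.

def demographic_parity_difference(y_pred, groups):
    """Return |P(y_pred=1 | group A) - P(y_pred=1 | group B)|
    for binary predictions over exactly two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(members) / len(members)  # positive-prediction rate
    a, b = rates.values()
    return abs(a - b)

# Group "a" receives positive predictions at rate 3/4, group "b" at 1/4,
# so the demographic parity difference is 0.5.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

A measure of exactly zero would mean both groups receive positive predictions at the same rate; impossibility results, another topic above, show that several such measures cannot in general be satisfied simultaneously.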