The AI Act (AIA) is a landmark piece of EU legislation that regulates Artificial Intelligence based on its capacity to cause harm. Like the EU's General Data Protection Regulation (GDPR), the AIA could become a global standard, determining to what extent AI can affect our lives wherever we might be. The AI Act is already making waves internationally: in late September, Brazil's Congress passed a bill that creates a legal framework for artificial intelligence. The AIA adopts a risk-based approach that bans certain technologies, proposes strict regulations for "high-risk" ones, and imposes stringent transparency criteria for others. The first draft of the AIA has been heavily criticized, and amendments have been proposed by several stakeholder groups, with the main focus on high-risk systems and the obligations of their developers. It seems there is still a long way to go before the final text is ready for approval. A crucial question is to what extent the requirements of this regulation can be enforced.
This workshop aims to analyze how this new regulation will shape the AI technologies of the future. We will cover issues such as the operationalization of AIA requirements; privacy, fairness, and explainability by design; individual rights under the AIA; AI risk assessment; and much more. The workshop will bring together legal experts, technology experts, and other interested stakeholders for constructive discussion, and we aim for stakeholder and geographical balance. Its main goal is to help the community understand and reason about the implications of AI regulation: what problems it solves, what problems it does not solve, and what problems it causes; to discuss the newly proposed amendments to the text of the AI Act; and to propose new approaches that may not have been tackled yet.
Papers are welcome from academics, researchers, practitioners, postgraduate students, the private sector, and anyone else with an interest in law and technology. Submissions with an interdisciplinary orientation are particularly welcome, e.g. works at the boundary between machine learning, AI, human-computer interaction, law, and ethics. Submissions can include regular papers, short papers, working papers, and/or extended abstracts.
The workshop will be held in person. The venue is Design Offices Macherei (more information on the conference website).
The post-proceedings of IAIL 2023 (Imagining the AI Landscape After the AI Act), held in conjunction with HHAI 2023 in Munich, Germany, on June 27, 2023, will be published in CEUR Workshop Proceedings.
Call for Papers
Do we already have the technology to comply with the proposed regulation? How can the privacy, fairness, and explainability requirements of the AI Act be operationalized? To what extent does the AI Act protect individual rights? How can redress be accomplished? What are the best methods to perform a risk assessment of AI applications? Do we need to define new metrics for validating the quality of an AI system in terms of privacy, fairness, and explainability? What methods for assessing dataset quality need to be created to comply with the current proposal for the AI regulation? How is it possible to deliver a process that effectively certifies AI? How will the proposed AI Act impact non-EU tech companies operating in the EU? Will this make the EU the leader in AI market regulation?
If these questions form part of your research interests, we would be glad to hear from you.
Topics of interest include, but are not limited to:
- The AI Act and future technologies
- Applications of AI in the legal domain
- Ethical and legal issues of AI technology and its application
- Dataset quality evaluation
- AI and human oversight
- AI and human autonomy
- Accountability and liability of AI
- Algorithmic bias, discrimination, and inequality
- Fairness by design
- AI and trust
- Transparent AI
- Explainability by design
- Explainability metrics and evaluation
- AI and human rights
- The impact of AI and automated decision-making on the rule of law
- Privacy by design
- AI risk assessment
- AI certification
- Safety, reliance, and trust in human-AI interactions
- Human-in-the-loop paradigm
Papers intended to foster discussion and exchange of ideas are welcome from academics, researchers, practitioners, postgraduate students, the private sector, and anyone else with an interest in law and technology. Submissions with an interdisciplinary orientation are particularly welcome, e.g. works at the boundary between machine learning, AI, human-computer interaction, law, digital philosophy, and ethics.