In April 2021, the EU Parliament published a proposal, the AI Act (AIA), for regulating the use of AI systems and services in the Union market. However, the effects of EU digital regulations usually extend beyond the Union's borders. An example of what has been named the "Brussels effect" - the high impact of EU digital regulations around the world - is the General Data Protection Regulation (GDPR), which came into effect in May 2018 and rapidly became a world standard. The AIA seems to go in the same direction, having a clear extraterritorial scope: it applies to any AI system or service that has an impact on European citizens, regardless of where its provider or user is located. The AIA adopts a risk-based approach that bans certain technologies, proposes strict regulations for "high-risk" ones, and imposes stringent transparency criteria for others. If adopted, the AIA will undoubtedly have a significant impact in the EU and beyond. A crucial question is whether we already have the technology to comply with the proposed regulation, and to what extent its requirements can be enforced.
This workshop aims to analyze how this new regulation will shape the AI technologies of the future. Do we already have the technology to comply with the proposed regulation? How can the privacy, fairness, and explainability requirements of the AI Act be operationalized? To what extent does the AI Act protect individual rights? How can redress be accomplished? What are the best methods for performing a risk assessment of AI applications? Do we need to define new metrics for validating the quality of an AI system in terms of privacy, fairness, and explainability? What methods for assessing the quality of datasets need to be created to ensure compliance with the current proposal for the AI regulation? How can a process be delivered that effectively certifies AI systems? How will the proposed AI Act impact non-EU tech companies operating in the EU? Will this make the EU the global leader in AI market regulation?