By offering a large number of highly diverse resources, learning platforms have attracted many participants, and interactions with these systems have generated a vast amount of learning-related data. The collection, processing, and analysis of these data have driven significant growth in machine learning and knowledge discovery approaches and have opened up new opportunities for supporting and assessing educational experiences in a data-driven fashion.
In the past decade, machine learning based decision systems have been adopted in a wide range of application domains, such as credit scoring, insurance risk assessment, and health monitoring, in which accuracy is of the utmost importance. Although these systems have immense potential to improve decision making in different fields, their use may present ethical and legal risks, such as codifying biases, jeopardizing transparency and privacy, and reducing accountability. Unfortunately, these risks arise across many applications.
The World Conference on Explainable Artificial Intelligence (XAI 2023) is an annual event that aims to bring together researchers, academics, and professionals, promoting the sharing and discussion of knowledge, new perspectives, experiences, and innovations in the field of eXplainable Artificial Intelligence (XAI).
The AI Act (AIA) is a landmark piece of EU legislation regulating Artificial Intelligence based on its capacity to cause harm. Like the EU's General Data Protection Regulation (GDPR), the AIA could become a global standard, determining to what extent AI can affect our lives wherever we might be. The AI Act is already making waves internationally. In late September, Brazil's Congress passed a bill that creates a legal framework for artificial intelligence.
Ital-IA is the third CINI National Conference on Artificial Intelligence, organized to develop shared objectives among public institutions, Italian industry, and the scientific research of universities and national research centres. Ital-IA has the ambition of building a national network among all the initiatives currently taking shape in Italy to seize the development potential of Artificial Intelligence technologies.
09:30 - Ethical, Trustworthy, Interactive AI: Explainability and Interactive AI
Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Salvatore Rinzivillo, Eleonora Cappuccio, Francesco Spinnato, Francesco Bodria, Mattia Setzu, Andrea Beretta
Privacy in statistical databases is about balancing the tension between the increasing societal and economic demand for accurate information and the legal and ethical obligation to protect the privacy of the individuals and enterprises that are the respondents providing the statistical data. In the case of statistical databases, the motivation for respondent privacy is one of survival: data collectors cannot expect to collect accurate information from individual or corporate respondents unless these respondents feel that the privacy of their responses is guaranteed.
Pisa is the home of the first edition of the “AI & Society Summer School”, organized by the Italian National PhD program in Artificial Intelligence, PhD-AI.it. The Summer School is dedicated to the PhD students of the “AI & Society” branch of PhD-AI.it, and open to PhD students of the other branches. Five thrilling days of lectures, panels, poster sessions and proactive project work, to advance the frontier of AI research together with internationally renowned scientists.
In April 2021, the European Commission published a proposal, the AI Act (AIA), for regulating the use of AI systems and services in the Union market. However, the effects of EU digital regulations usually transcend its borders. An example of what has been named the "Brussels effect" - the high impact of EU digital regulations around the world - is the General Data Protection Regulation (GDPR), which came into effect in May 2018 and rapidly became a world standard.
The European Union’s vision of a digital ecosystem of trust focuses on fundamental values such as fairness, transparency, and accountability. Beyond the focus on innovation and novel technological solutions, this constitutes a notable departure from the corporate-regulated US model and the state-controlled technological landscape of China and Russia.
To celebrate Data Protection Day, the Legality Attentive Data Scientists, together with the Brussels Privacy Hub, explore one of the biggest open challenges of data protection law: explainability and accountability of Artificial Intelligence. The event, with high-level stakeholders and experts, addresses the twists and thorns of developing legality attentive Artificial Intelligence as a standard for our societies.
DataMod 2021 aims at bringing together practitioners and researchers from academia, industry and research institutions interested in the combined application of computational modelling methods with data-driven techniques from the areas of knowledge management, data mining and machine learning. Modelling methodologies of interest include automata, agents, Petri nets, process algebras and rewriting systems.
Artificial Intelligence systems are playing an increasingly important role in our daily lives. As their importance grows, it is fundamental that the internal mechanisms guiding these algorithms are as clear as possible. It is not by chance that the recent General Data Protection Regulation (GDPR) emphasized the users' right to explanation when people face artificial intelligence-based technologies.
Leading international AI experts from civil society, academia, industry and governments, including ministerial-level delegates from GPAI’s membership, will come together on 11-12 November for GPAI’s annual event.
This important occasion offers GPAI experts and member governments the opportunity to showcase recent developments from GPAI working groups and to discuss how their collective efforts can be best harnessed to advance the responsible development and utilization of this technology.