09:30 - Ethical, Trustworthy, Interactive AI:
Explainability and Interactive AI
Fosca Giannotti Riccardo Guidotti Anna Monreale Salvatore Rinzivillo
Privacy in statistical databases is about balancing the tension between the increasing societal and economic demand for accurate information and the legal and ethical obligation to protect the privacy of the individuals and enterprises that provide the statistical data as respondents. In the case of statistical databases, the motivation for respondent privacy is one of survival: data collectors cannot expect to collect accurate information from individual or corporate respondents unless those respondents feel the privacy of their responses is guaranteed.
In the past decade, machine learning based decision systems have been used in a wide range of application domains, such as credit scoring, insurance risk assessment, and health monitoring, in which accuracy is of the utmost importance. Although these systems have immense potential to improve decision making in different fields, their use may present ethical and legal risks, such as codifying biases, jeopardizing transparency and privacy, and reducing accountability. Unfortunately, these risks arise across many applications.
Pisa is the home of the first edition of the “AI & Society Summer School”, organized by the Italian National PhD program in Artificial Intelligence, PhD-AI.it. The Summer School is dedicated to the PhD students of the “AI & Society” branch of PhD-AI.it and open to PhD students of the other branches. Five thrilling days of lectures, panels, poster sessions and proactive project work, to advance the frontier of AI research together with internationally renowned scientists.
In April 2021, the EU Parliament published a proposal, the AI Act (AIA), for regulating the use of AI systems and services in the Union market. However, the effects of EU digital regulations usually transcend its borders. An example of what has been named the "Brussels effect" - the high impact of EU digital regulations around the world - is the General Data Protection Regulation (GDPR), which came into effect in May 2018 and rapidly became a world standard.
The European Union’s vision of a digital ecosystem of trust focuses on fundamental values such as fairness, transparency, and accountability. Aside from the focus on innovation and novel technological solutions, this constitutes a relevant novelty in contrast with the corporate-regulated US model or the state-controlled technological landscape of China and Russia.
To celebrate Data Protection Day, the Legality Attentive Data Scientists, together with the Brussels Privacy Hub, explore one of the biggest open challenges of data protection law: the explainability and accountability of Artificial Intelligence. The event, featuring high-level stakeholders and experts, addresses the twists and turns of developing legality-attentive Artificial Intelligence as a standard for our societies.
DataMod 2021 aims at bringing together practitioners and researchers from academia, industry and research institutions interested in the combined application of computational modelling methods with data-driven techniques from the areas of knowledge management, data mining and machine learning. Modelling methodologies of interest include automata, agents, Petri nets, process algebras and rewriting systems.
Artificial Intelligence systems are playing an increasingly important role in our daily lives. As their importance grows, it is fundamental that the internal mechanisms guiding these algorithms be as clear as possible. It is not by chance that the recent General Data Protection Regulation (GDPR) emphasized users' right to explanation when people face artificial intelligence-based technologies.
Leading international AI experts from civil society, academia, industry and governments, including ministerial-level delegates from GPAI’s membership, will come together on 11-12 November for GPAI’s annual event.
This important occasion offers GPAI experts and member governments the opportunity to showcase recent developments from GPAI working groups and to discuss how their collective efforts can best be harnessed to advance the responsible development and use of this technology.
Data science is an opportunity for boosting social progress, and data analysis tools are triggering new services with a clear impact on our daily life. SoBigData RI (http://www.sobigdata.eu) is a multi-disciplinary research infrastructure aimed at using social mining and big data to understand the complexity of our contemporary, globally-interconnected society.
Artificial Intelligence (AI) techniques based on big data and algorithmic processing are increasingly used to guide decisions in important societal spheres, including hiring decisions, university admissions, loan granting, and crime prediction. However, there are growing concerns with regard to the epistemic and normative quality of AI evaluations and predictions. In particular, there is strong evidence that algorithms may sometimes amplify rather than eliminate existing bias and discrimination, and thereby have negative effects on social cohesion and on democratic institutions.
During the past few years the increasing availability of large-scale datasets that capture activities in scientific publications, patents, grant proposals, sports, enterprises, as well as social media activities has created an unprecedented opportunity to explore patterns underlying success.