Apply to PhD-AI.it "AI & Society" @ Pisa - 3 positions funded by KDD Lab, 43 positions overall!

The new national PhD program in Artificial Intelligence is on the launchpad!

https://phd-ai.it/en/

Our KDD Lab - Knowledge Discovery & Data Mining Laboratory @ the University of Pisa and ISTI-CNR (the National Research Council of Italy) funds 3 positions in the "AI & Society" PhD.

Interested in joining a vibrant research ecosystem working on cutting-edge topics in human-centered AI, XAI, social AI, AI ethics, and Trustworthy AI?

Apply by 23 July, 13:00 CET (hard deadline).
The call for applications, with details on all 43 positions, is here:

https://dottorato.unipi.it/index.php/en/application-process-for-the-acad...

and here

https://phd-ai.it/en/ai-society/

KDD Lab's funded positions are on the following topics in human-centered AI, explainable AI (XAI), and social AI:

1) Topic: "eXplanation of AI decision making as human-machine collaboration"
Venue: KDD Lab @ CNR, ISTI, Pisa
Contact person: Fosca Giannotti, fosca.giannotti@isti.cnr.it

We are evolving, faster than expected, from a time when humans code algorithms and carry responsibility for the quality and correctness of the resulting software, to a time when machines automatically learn algorithms from sufficiently many examples of the expected input/output behavior. It is urgent that machine learning and AI be explainable and comprehensible in human terms: this is instrumental for validating the quality and correctness of the resulting systems, for aligning the algorithms with human values, and for preserving human autonomy and awareness in decision making. The challenge is hard, as explanations should be sound and complete in statistical and causal terms, and yet comprehensible to multiple stakeholders, who should be empowered to reason on explanations and to understand: how the automated decision-making system works on the basis of the inputs provided by the user; what the most critical features are; whether the system adopts latent features; how a specific decision is taken and on the basis of what rationale; and how the user could get a better decision in the future. The focus of this theme is explanation as an interactive, collaborative process between user and machine, each stimulating the other to explore several possible scenarios.
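To make two of these explanation styles concrete, here is a minimal, purely illustrative sketch (not part of the call): a shallow surrogate tree that answers "what are the most critical features", and a naive single-feature counterfactual search that answers "how the user could get a better decision". The dataset, black-box model, and perturbation strategy are all assumptions chosen for brevity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# An opaque "black box" standing in for any automated decision maker.
# Synthetic data is a placeholder for a real high-stakes dataset.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Global surrogate: a shallow tree trained to mimic the black box's
# predictions, readable by a human stakeholder.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))

# Naive counterfactual: nudge one feature of an instance until the
# black box's decision flips, suggesting what the user could change.
x = X[0].copy()
original = black_box.predict([x])[0]
for step in np.linspace(0.1, 3.0, 30):
    candidate = x.copy()
    candidate[0] += step
    if black_box.predict([candidate])[0] != original:
        print(f"Decision flips if f0 increases by {step:.2f}")
        break
```

Research on this theme goes far beyond such one-shot outputs, toward explanations the user can interrogate and refine in dialogue with the machine.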

2) Topic: "Safe, Ethical & Effective AI": designing intelligent systems that are accountable and behave within an ethically minded framework
Venue: KDD Lab @ CNR, ISTI, Pisa
Contact person: Fosca Giannotti, fosca.giannotti@isti.cnr.it

Citizens continuously interact with software systems, e.g., on a mobile device, in their smart homes, or on board (semi-)autonomous cars. This will happen more and more in the future, as AI technologies spread through the fabric of society and impact the social, economic, and political spheres. The European Union is seriously confronting the dangers of unauthorized disclosure and improper use of personal data, but there is a less evident and more serious risk touching the core of citizens' fundamental rights. Worries about the growth of the data economy and the increasing presence of AI-enabled autonomous systems have shown that privacy concerns alone are insufficient: other ethical values and human dignity are at stake. To this end, the Ethics Guidelines for Trustworthy AI of the EU High-Level Expert Group on AI set out the requirements that an AI system should satisfy: (i) respecting the rule of law; (ii) being aligned with agreed ethical principles and values, including privacy, fairness, and human dignity; (iii) keeping us, the humans, in control; (iv) ensuring the system's behavior is transparent to us and its decision-making process is explainable; and (v) being robust and safe, i.e., the system's behavior remains trustworthy even if things go wrong.

To address these principles, methods and tools to support the representation, evaluation, verification, and transparency of ethical deliberation by machines need to be investigated. The proposed theme covers a wide variety of techniques aimed at designing intelligent systems with attention to their impact on society, capable of incorporating moral values. Theses on this topic may focus on one or more ethical principles, proactively and holistically inscribing the values of privacy, fairness, and explainability in the design of AI technology; the theme is also designed to extend to other societally relevant ethical issues such as misinformation and polarization.
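As one small, hypothetical example of operationalizing such a value, the sketch below computes a standard group-fairness measure, the demographic parity gap, over a model's decisions. The data are invented for illustration, and this is only one of many fairness criteria relevant to this theme.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-decision rates across groups.

    A gap of 0 means every group receives favourable decisions at the
    same rate; larger gaps indicate potential disparate impact.
    """
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Hypothetical binary decisions for 8 applicants from two groups (A/B).
decisions = np.array([1, 1, 0, 1, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(demographic_parity_gap(decisions, groups))  # 0.75 vs 0.25 -> 0.5
```

Verifying and enforcing such properties by design, rather than auditing them after the fact, is part of what this theme investigates.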

3) Topic: "Science and technology of interpretable machine learning and explanation of AI-assisted decision making"
Venue: KDD Lab @ Univ. Pisa
Contact person: Dino Pedreschi, dino.pedreschi@unipi.it

What new generation of AI-powered tools may improve human decision making in high-stakes domains, both at the individual and the societal level? How can AI really boost human cognitive capacities in crucial applications such as healthcare, justice, finance, and job recruiting? How can we cope with bias, inequality, polarization, and the depletion of common goods? And how can we tackle these problems not only at the level of individual decision making, but also for complex hybrid techno-social systems made of many interacting people, machines, and algorithms?

The three positions are partially funded by the following projects led by KDD Lab:
The ERC Advanced Grant n. 834756 XAI "Science and Technology for the Explanation of AI Decision Making", principal investigator Fosca Giannotti
The European Network of Excellence "Human-centered Artificial Intelligence Humane AI Net", grant n. 952026, principal investigator of the Social AI line: Dino Pedreschi
The European Network of Excellence "Trustworthy AI - Integrating Learning, Optimisation and Reasoning TAILOR", grant n. 952215, principal investigator of the Trustworthy AI line: Fosca Giannotti

Also check out the 38 other positions offered at the Pisa branch of PhD-AI.it!
