Francesca Naretto


I am a third-year Ph.D. student in Data Science, under the supervision of Prof. Fosca Giannotti and Prof. Anna Monreale. I work on eXplainable AI within the ERC project XAI ("Science and technology for the eXplanation of AI decision making"), led by Prof. Fosca Giannotti. I am also part of the EU H2020 project SoBigData++ and the TAILOR project.
I graduated in Computer Science (Bachelor's degree from the University of Turin, Master's degree from the University of Pisa). During my Master's, I won a scholarship to work on my thesis abroad at University College London. The thesis proposed a framework for privacy risk prediction and explanation tailored to sequence data.
My Master's thesis received the ETIC Award 2019-2020 (District 2031, Rotary International), recognizing promising results on ethical issues in computing, including data privacy and the right to an explanation.
My primary research interest is Ethical AI, with a particular focus on Data Privacy and Explainable AI. Both values are essential: achieving them together may enable trustworthy AI. However, they impose different, and sometimes conflicting, requirements, so there are both synergies and tensions between them. My Ph.D. tackles these problems.

In the context of my project, I published EXPERT: a framework that predicts the privacy risk of a user's data and pairs each prediction with a local explanation. We developed this framework for tabular and sequential data, exploiting state-of-the-art machine learning models for privacy risk prediction, such as LSTMs, ROCKET, InceptionTime and gcForest. For the local explanations, we rely on LIME, LORE and SHAP.
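To illustrate the local-explanation step, the following is a minimal, self-contained sketch of LIME's core idea (fitting a proximity-weighted linear surrogate around a single instance) applied to a toy "privacy risk" classifier. It is not the EXPERT code: the data, model and scale parameters here are all hypothetical stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(42)

# Toy black box: synthetic tabular data with a binary "at risk" label
# (a stand-in for the outcome of a simulated privacy attack).
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
black_box = LogisticRegression().fit(X, y)

# Instance whose risk prediction we want to explain.
x0 = X[0]

# LIME's core idea: sample perturbations around x0, query the black box,
# then fit a linear surrogate weighted by proximity to x0.
Z = x0 + rng.normal(scale=0.5, size=(1000, 4))
risk = black_box.predict_proba(Z)[:, 1]
proximity = np.exp(-np.sum((Z - x0) ** 2, axis=1))
surrogate = LinearRegression().fit(Z, risk, sample_weight=proximity)

# The surrogate's coefficients act as local feature importances.
local_importance = surrogate.coef_
```

In this sketch the surrogate's largest coefficient points at the feature that actually drives the label, which is exactly the kind of local attribution EXPERT attaches to a privacy-risk prediction.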
In this context, we also defined and empirically tested a new privacy attack for text data, based on psychometric profiles extracted from the text. This new attack lets us further explore the behavior of EXPERT across different kinds of data and privacy attacks.
We then proposed HOLDA: a new hierarchical federated learning approach for cross-silo settings, whose goal is to maximize the generalization capabilities of the trained machine learning models.
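The hierarchical aggregation behind cross-silo federated learning can be sketched in a few lines. This is not the HOLDA algorithm itself, only a toy two-level, FedAvg-style weighted average where clients aggregate within each silo before a global server combines the silo models; the silo layout and sample counts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: two silos, each holding clients with a local model
# (here just a flat weight vector) and a local sample count.
silos = [
    [(rng.normal(size=4), 100), (rng.normal(size=4), 50)],
    [(rng.normal(size=4), 200)],
]

def weighted_avg(models):
    """FedAvg-style step: sample-weighted mean of weight vectors.

    Returns the aggregated vector and the total sample count,
    so the result can feed the next level of the hierarchy.
    """
    counts = np.array([n for _, n in models], dtype=float)
    stacked = np.stack([w for w, _ in models])
    return np.average(stacked, axis=0, weights=counts), counts.sum()

# Level 1: each silo aggregates its own clients.
silo_models = [weighted_avg(clients) for clients in silos]

# Level 2: the server aggregates the silo models, weighting each
# silo by its total sample count.
global_model, total_n = weighted_avg(silo_models)
```

A useful sanity check on this scheme: because each level carries its sample count upward, the two-level average coincides with a flat weighted average over all clients.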
Lastly, I also worked on a survey of explainable AI methods, including a benchmark of the most popular Python XAI libraries.
During my Ph.D. I have worked as a teaching assistant for a course in Data Mining (for the Master's degree in Computer Science) and for a first programming course in Python (for the Master's in Big Data and Data Science).
This year, I also conducted some seminars at Scuola Normale Superiore on Machine Learning techniques and Explainable AI.
In these courses I mainly teach clustering, machine learning and eXplainable AI methods from a practical point of view.

Data Privacy, Privacy by Design, Explainable AI, Federated Learning
Workshop Chair for IAIL2022
PC member for KDD2022
Master in Computer Science at the University of Pisa (110)
Ph.D. Student
Scuola Normale Superiore
Francesca Naretto on Google Scholar
Francesca Naretto on DBLP
Francesca Naretto on LinkedIn


Benchmarking and Survey of Explanation Methods for Black Box Models | AISC

Rapporteurs: Francesco Bodria, Francesca Naretto; Guest: Muhammad Rehman Zafar