After obtaining my Ph.D. in Data Science, I am now a postdoctoral researcher at the University of Pisa.
During my Ph.D. I had the opportunity to work with Prof. Fosca Giannotti and Prof. Anna Monreale. My thesis focused on Ethical AI, with a particular interest in Data Privacy and Explainable AI. These two ethical values are essential: achieving them may enable the definition of trustworthy ethical AI. However, each comes with different requirements, so there are both synergies and tensions between them. During my Ph.D. I tackled exactly these problems.
In the context of my project, I published EXPERT, a framework that predicts the privacy risk of a user's data and pairs each prediction with a local explanation. We developed this framework for tabular and sequential data, exploiting state-of-the-art machine learning models for privacy risk prediction, such as LSTMs, Rocket, InceptionTime and GCForest, and LIME, LORE and SHAP for the local explanations.
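To make this predict-then-explain pattern concrete, here is a minimal, purely illustrative sketch (not the actual EXPERT code): a black-box classifier predicts a binary privacy-risk label on synthetic tabular data, and LIME attaches a local explanation to one prediction. The feature names, labels and data below are all made up for illustration.

```python
# Illustrative predict-then-explain sketch (not the EXPERT code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# Toy data standing in for per-user tabular features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # 1 = at risk (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: predict the privacy risk with a black-box model.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Step 2: attach a local explanation to an individual prediction.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=[f"f{i}" for i in range(X.shape[1])],
    class_names=["safe", "at_risk"],
    mode="classification",
)
exp = explainer.explain_instance(X_test[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # top features driving this user's predicted risk
```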
The results we obtained highlighted some drawbacks of this kind of explanation method. For this reason, we enhanced LORE, proposing a novel version that achieves better results on the metrics most commonly used in this setting, such as fidelity, stability and faithfulness.
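As a rough illustration of one of these metrics: fidelity is often defined as the agreement between the local surrogate model and the black box on the neighborhood of the explained instance. This is a common reading from the literature, not necessarily the exact definition used in our papers.

```python
import numpy as np

def fidelity(black_box_pred, surrogate_pred):
    """Fraction of neighborhood points on which the local surrogate
    agrees with the black-box model (one common fidelity definition)."""
    black_box_pred = np.asarray(black_box_pred)
    surrogate_pred = np.asarray(surrogate_pred)
    return float(np.mean(black_box_pred == surrogate_pred))
```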
Still regarding EXPERT and privacy risk assessment, we also defined and empirically tested a new privacy attack for text data, based on the psychometric profile extracted from the text. This new attack allowed us to further explore the behavior of EXPERT with different kinds of data and privacy attacks.
We then proposed HOLDA, a new hierarchical federated learning approach for cross-silo settings, whose goal is to maximize the generalization capabilities of the trained machine learning models.
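To give an idea of what "hierarchical" means here, the toy sketch below shows two-level federated averaging: client updates are first aggregated within each silo, and the silo-level models are then merged into a global model. This is a simplified illustration of the general idea, not HOLDA's actual algorithm; the silos, sizes and parameter values are all invented.

```python
# Toy two-level (hierarchical) federated averaging sketch.
import numpy as np

def fedavg(weights, sizes):
    """Size-weighted average of model parameter vectors."""
    return np.average(np.stack(weights), axis=0, weights=np.asarray(sizes, float))

# Two silos, each holding local client updates (toy 4-parameter models).
silo_a = [np.ones(4) * 1.0, np.ones(4) * 2.0]  # clients in silo A
silo_b = [np.ones(4) * 4.0]                    # clients in silo B

# Level 1: intra-silo aggregation.
agg_a = fedavg(silo_a, sizes=[100, 300])
agg_b = fedavg(silo_b, sizes=[200])

# Level 2: inter-silo aggregation produces the global model.
global_model = fedavg([agg_a, agg_b], sizes=[400, 200])
print(global_model)
```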
Lastly, I also worked on a survey of Explainable AI methods, including a benchmark of the most popular XAI methods available in Python.
Currently, I am involved in several European projects, such as the EU H2020 project SoBigData++, the TAILOR project and the XAI ERC project.