In the era of Big Data, person-specific data are increasingly collected, stored, and analyzed by modern organizations. These data typically describe different dimensions of daily social life and lie at the heart of a knowledge society, in which the understanding of complex social phenomena is sustained by knowledge extracted, through mining technologies, from the mines of big data across the various social dimensions. However, the remarkable opportunities for discovering interesting knowledge from these data can be outweighed by the high risks of (i) privacy violation, when uncontrolled intrusion into the personal data of the subjects occurs, and (ii) discrimination, when the discovered knowledge is unfairly used to make discriminatory decisions about the (possibly unaware) people who are classified or profiled. Both privacy intrusion and discrimination jeopardize trust: if not adequately countered, they can undermine the idea of a fair and democratic knowledge society.
Big data analytics and fairness are not necessarily enemies. Many practical and impactful services based on big data analytics can be designed in such a way that the quality of results coexists with protection against discrimination and privacy violation.
The solution is the application of the privacy-by-design and fairness-by-design principles.
This promising paradigm suggests developing technological frameworks that counter the threats of undesirable, unlawful privacy violation and discrimination without obstructing the knowledge discovery opportunities of social mining and big data analytics. This entails three lines of work:
- Studying and designing methods for assessing privacy risks in data analytics.
- Building frameworks to counter the threats of undesirable, unlawful privacy violation without obstructing knowledge discovery opportunities.
- Designing algorithms for discovering discrimination in socially sensitive decision data and for enforcing fairness in data mining models.
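As an illustration of the privacy-risk assessment objective, the sketch below computes one classical measure, the k-anonymity level of a dataset: the size of the smallest group of records that share the same quasi-identifier values. A small k means some individuals are nearly unique and thus at high re-identification risk. The function name and the toy records are illustrative assumptions, not artifacts of the project itself.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest number of records sharing the same combination of
    quasi-identifier values; a low k signals high re-identification risk."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(groups.values())

# Hypothetical toy dataset: zip code and age band are quasi-identifiers.
records = [
    {"zip": "56100", "age": "30-40", "disease": "flu"},
    {"zip": "56100", "age": "30-40", "disease": "asthma"},
    {"zip": "56121", "age": "20-30", "disease": "flu"},
]
print(k_anonymity(records, ["zip", "age"]))  # → 1: the (56121, 20-30) record is unique
```

In a real assessment framework such a measure would be computed over the actual quasi-identifiers of the released data, and a release would be suppressed or generalized until k reaches an acceptable threshold.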
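For the discrimination-discovery objective, a minimal sketch of one standard measure used in discrimination-aware data mining is the risk difference: the rate of negative decisions for a protected group minus the rate for everyone else. The decision labels and the toy data are hypothetical; real analyses operate on historical decision records such as loan or hiring data.

```python
def risk_difference(decisions, protected):
    """Risk difference (RD): P(deny | protected) - P(deny | unprotected).
    A positive RD signals potentially discriminatory treatment of the
    protected group."""
    def deny_rate(group):
        total = sum(1 for p in protected if p == group)
        denied = sum(
            1 for d, p in zip(decisions, protected)
            if p == group and d == "deny"
        )
        return denied / total
    return deny_rate(True) - deny_rate(False)

# Hypothetical decision records: protected applicants are denied more often.
decisions = ["deny", "deny", "grant", "grant", "deny", "grant"]
protected = [True, True, True, False, False, False]
print(risk_difference(decisions, protected))  # positive: 2/3 vs 1/3 denial rate
```

Fairness-enforcing mining algorithms then aim to learn models whose induced decisions keep such measures below a chosen threshold while preserving predictive quality.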