<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Anirban Basu</style></author><author><style face="normal" font="default" size="100%">Anna Monreale</style></author><author><style face="normal" font="default" size="100%">Roberto Trasarti</style></author><author><style face="normal" font="default" size="100%">Juan Camilo Corena</style></author><author><style face="normal" font="default" size="100%">Fosca Giannotti</style></author><author><style face="normal" font="default" size="100%">Dino Pedreschi</style></author><author><style face="normal" font="default" size="100%">Shinsaku Kiyomoto</style></author><author><style face="normal" font="default" size="100%">Yutaka Miyake</style></author><author><style face="normal" font="default" size="100%">Tadashi Yanagihara</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A risk model for privacy in trajectory data</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Trust Management</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2015</style></year></dates><number><style face="normal" font="default" size="100%">1</style></number><volume><style face="normal" font="default" size="100%">2</style></volume><pages><style face="normal" font="default" size="100%">9</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Time sequence data relating to users, such as medical histories and mobility data, are good candidates for data mining, but often contain highly sensitive information. 
Different methods in privacy-preserving data publishing are utilised to release such private data so that individual records in the released data cannot be re-linked to specific users with a high degree of certainty. These methods provide theoretical worst-case privacy risks as measures of the privacy protection that they offer. However, with many real-world datasets the worst-case scenario is too pessimistic and does not provide a realistic view of the privacy risks: the real probability of re-identification is often much lower than the theoretical worst-case risk. In this paper, we propose a novel empirical risk model for privacy which, in relation to the cost of privacy attacks, better demonstrates the practical risks associated with a privacy-preserving data release. We present a detailed evaluation of the proposed risk model using k-anonymised real-world mobility data, and then show how the empirically evaluated privacy risk follows a different trend in synthetic data describing random movements.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Anirban Basu</style></author><author><style face="normal" font="default" size="100%">Juan Camilo Corena</style></author><author><style face="normal" font="default" size="100%">Anna Monreale</style></author><author><style face="normal" font="default" size="100%">Dino Pedreschi</style></author><author><style face="normal" font="default" size="100%">Fosca Giannotti</style></author><author><style face="normal" font="default" size="100%">Shinsaku Kiyomoto</style></author><author><style face="normal" font="default" size="100%">Jaideep Vaidya</style></author><author><style face="normal" font="default" size="100%">Yutaka Miyake</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">CF-inspired Privacy-Preserving Prediction of Next Location in the Cloud</style></title><secondary-title><style face="normal" font="default" size="100%">2014 IEEE 6th International Conference on Cloud Computing Technology and Science (CloudCom)</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2014</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://dx.doi.org/10.1109/CloudCom.2014.114</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Mobility data gathered from location sensors such as Global Positioning System (GPS)-enabled phones and vehicles is valuable for spatio-temporal data mining for various location-based services (LBS). Such data is often considered sensitive, and many mechanisms exist for privacy-preserving analysis of the data. Through various anonymisation mechanisms, it can be ensured with high probability that a particular individual cannot be identified when mobility data is outsourced to third parties for analysis. However, challenges remain with the privacy of queries on outsourced analysis results, especially when the queries are sent directly to third parties by end-users. Drawing inspiration from our earlier work in privacy-preserving collaborative filtering (CF) and next location prediction, in this exploratory work we propose a novel representation of trajectory data in the CF domain and experiment with a privacy-preserving Slope One CF predictor. We evaluate the accuracy and computational performance of our proposal using anonymised real traffic data from the Italian cities of Pisa and Milan.
One use-case is a third-party location-prediction-as-a-service deployed on a public cloud, which can respond to privacy-preserving queries while enabling data owners to build a rich predictor on the cloud.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Anirban Basu</style></author><author><style face="normal" font="default" size="100%">Anna Monreale</style></author><author><style face="normal" font="default" size="100%">Juan Camilo Corena</style></author><author><style face="normal" font="default" size="100%">Fosca Giannotti</style></author><author><style face="normal" font="default" size="100%">Dino Pedreschi</style></author><author><style face="normal" font="default" size="100%">Shinsaku Kiyomoto</style></author><author><style face="normal" font="default" size="100%">Yutaka Miyake</style></author><author><style face="normal" font="default" size="100%">Tadashi Yanagihara</style></author><author><style face="normal" font="default" size="100%">Roberto Trasarti</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A Privacy Risk Model for Trajectory Data</style></title><secondary-title><style face="normal" font="default" size="100%">Trust Management VIII - 8th IFIP WG 11.11 International Conference, IFIPTM 2014, Singapore, July 7-10, 2014. Proceedings</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2014</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://dx.doi.org/10.1007/978-3-662-43813-8_9</style></url></web-urls></urls><pages><style face="normal" font="default" size="100%">125–140</style></pages><abstract><style face="normal" font="default" size="100%">Time sequence data relating to users, such as medical histories and mobility data, are good candidates for data mining, but often contain highly sensitive information. Different methods in privacy-preserving data publishing are utilised to release such private data so that individual records in the released data cannot be re-linked to specific users with a high degree of certainty. These methods provide theoretical worst-case privacy risks as measures of the privacy protection that they offer. However, with many real-world datasets the worst-case scenario is too pessimistic and does not provide a realistic view of the privacy risks: the real probability of re-identification is often much lower than the theoretical worst-case risk. In this paper, we propose a novel empirical risk model for privacy which, in relation to the cost of privacy attacks, better demonstrates the practical risks associated with a privacy-preserving data release. We present a detailed evaluation of the proposed risk model using k-anonymised real-world mobility data.</style></abstract></record></records></xml>