<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Anirban Basu</style></author><author><style face="normal" font="default" size="100%">Anna Monreale</style></author><author><style face="normal" font="default" size="100%">Juan Camilo Corena</style></author><author><style face="normal" font="default" size="100%">Fosca Giannotti</style></author><author><style face="normal" font="default" size="100%">Dino Pedreschi</style></author><author><style face="normal" font="default" size="100%">Shinsaku Kiyomoto</style></author><author><style face="normal" font="default" size="100%">Yutaka Miyake</style></author><author><style face="normal" font="default" size="100%">Tadashi Yanagihara</style></author><author><style face="normal" font="default" size="100%">Roberto Trasarti</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A Privacy Risk Model for Trajectory Data</style></title><secondary-title><style face="normal" font="default" size="100%">Trust Management VIII - 8th IFIP WG 11.11 International Conference, IFIPTM 2014, Singapore, July 7-10, 2014. Proceedings</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2014</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://dx.doi.org/10.1007/978-3-662-43813-8_9</style></url></web-urls></urls><pages><style face="normal" font="default" size="100%">125–140</style></pages><abstract><style face="normal" font="default" size="100%">Time-sequence data relating to users, such as medical histories and mobility data, are good candidates for data mining, but often contain highly sensitive information. 
Different methods in privacy-preserving data publishing are used to release such private data so that individual records in the released data cannot be re-linked to specific users with a high degree of certainty. These methods provide theoretical worst-case privacy risks as measures of the privacy protection that they offer. However, for many real-world datasets the worst-case scenario is too pessimistic and does not provide a realistic view of the privacy risks: the real probability of re-identification is often much lower than the theoretical worst-case risk. In this paper we propose a novel empirical risk model for privacy which, in relation to the cost of privacy attacks, better demonstrates the practical risks associated with a privacy-preserving data release. We present a detailed evaluation of the proposed risk model using k-anonymised real-world mobility data.</style></abstract></record></records></xml>