<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Anirban Basu</style></author><author><style face="normal" font="default" size="100%">Anna Monreale</style></author><author><style face="normal" font="default" size="100%">Roberto Trasarti</style></author><author><style face="normal" font="default" size="100%">Juan Camilo Corena</style></author><author><style face="normal" font="default" size="100%">Fosca Giannotti</style></author><author><style face="normal" font="default" size="100%">Dino Pedreschi</style></author><author><style face="normal" font="default" size="100%">Shinsaku Kiyomoto</style></author><author><style face="normal" font="default" size="100%">Yutaka Miyake</style></author><author><style face="normal" font="default" size="100%">Tadashi Yanagihara</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A risk model for privacy in trajectory data</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Trust Management</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2015</style></year></dates><number><style face="normal" font="default" size="100%">1</style></number><volume><style face="normal" font="default" size="100%">2</style></volume><pages><style face="normal" font="default" size="100%">9</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Time sequence data relating to users, such as medical histories and mobility data, are good candidates for data mining, but often contain highly sensitive information. 
Different methods in privacy-preserving data publishing are utilised to release such private data so that individual records in the released data cannot be re-linked to specific users with a high degree of certainty. These methods provide theoretical worst-case privacy risks as measures of the privacy protection that they offer. However, with many real-world datasets the worst-case scenario is often too pessimistic and does not provide a realistic view of the privacy risks: the real probability of re-identification is often much lower than the theoretical worst-case risk. In this paper, we propose a novel empirical risk model for privacy which, in relation to the cost of privacy attacks, better demonstrates the practical risks associated with a privacy-preserving data release. We present a detailed evaluation of the proposed risk model using k-anonymised real-world mobility data, and then show how the empirical evaluation of the privacy risk exhibits a different trend in synthetic data describing random movements.</style></abstract></record></records></xml>