Kostas Drakonakis (FORTH, Greece), Panagiotis Ilia (FORTH, Greece), Sotiris Ioannidis (FORTH, Greece), Jason Polakis (University of Illinois at Chicago, USA)

The exposure of location data constitutes a significant privacy risk to users as it can lead to de-anonymization, the inference of sensitive information, and even physical threats. In this paper, we present LPAuditor, a tool that conducts a comprehensive evaluation of the privacy loss caused by public location metadata. First, we demonstrate how our system can pinpoint users’ key locations at an unprecedented granularity by identifying their actual postal addresses. Our evaluation on Twitter data highlights the effectiveness of our techniques, which outperform prior approaches by 18.9%-91.6% for homes and 8.7%-21.8% for workplaces. Next, we present a novel exploration of automated private information inference that uncovers “sensitive” locations that users have visited (pertaining to health, religion, and sex/nightlife). We find that location metadata can provide additional context to tweets and thus lead to the exposure of private information that might not match the users’ intentions.
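To illustrate the kind of analysis the abstract describes, the sketch below shows one simple way a "key location" such as a home could be recovered from public tweet metadata: keep only tweets that carry an exact GPS point, bucket night-time points, and take the densest bucket as a home candidate. This is a hypothetical heuristic for illustration only, not LPAuditor's actual algorithm; field names assume Twitter API v1.1 tweet JSON.

```python
# Illustrative sketch only -- not LPAuditor's algorithm. It recovers a crude
# "home candidate" by counting night-time GPS points from a user's tweets.
# Field names assume Twitter API v1.1 tweet JSON.
from collections import Counter
from datetime import datetime

def home_candidate(tweets, precision=4):
    """Return the most frequent night-time coordinate, rounded to ~10 m."""
    counts = Counter()
    for t in tweets:
        coords = t.get("coordinates")
        if not coords:                      # tweet has no precise GPS point
            continue
        lon, lat = coords["coordinates"]    # GeoJSON order: [longitude, latitude]
        created = datetime.strptime(t["created_at"], "%a %b %d %H:%M:%S %z %Y")
        # Crude "likely at home" window; times are UTC, a real analysis
        # would first convert to the user's local timezone.
        if created.hour >= 22 or created.hour < 6:
            counts[(round(lat, precision), round(lon, precision))] += 1
    if not counts:
        return None
    (lat, lon), hits = counts.most_common(1)[0]
    return {"lat": lat, "lon": lon, "tweets": hits}
```

A candidate produced this way could then be reverse-geocoded to a postal address, which is the granularity the paper targets.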

We further explore the mismatch between user actions and information exposure, and find that older versions of the official Twitter apps followed a privacy-invasive policy of including precise GPS coordinates in the metadata of tweets that users had geotagged only at a coarse-grained level (e.g., city). The implications of this exposure are further exacerbated by our finding that users are considerably privacy-cautious with regard to exposing precise location data: when users can explicitly select what location data is published, there is a 94.6% reduction in tweets with GPS coordinates. As part of current efforts to give users more control over their data, LPAuditor can be adopted by major services and offered as an auditing tool that informs users about sensitive information they (indirectly) expose through location metadata.
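The mismatch described above can be spotted directly in tweet metadata. The hypothetical helper below flags tweets that carry only a coarse-grained place tag (e.g., a place of type "city") while still including an exact GPS point in the coordinates field. Field names follow Twitter API v1.1 tweet JSON; this is a sketch of the basic check, not the paper's auditing pipeline.

```python
# Flag tweets whose visible geotag is coarse (a "place") but whose metadata
# still contains an exact GPS point. Field names follow Twitter API v1.1 JSON.
def leaks_precise_location(tweet):
    place = tweet.get("place") or {}
    coords = tweet.get("coordinates")
    # Treat these place types as coarse-grained tags (assumption for this sketch).
    coarse_tag = place.get("place_type") in {"city", "admin", "country"}
    return bool(coords) and coarse_tag

# Example: count affected tweets in a user's timeline dump
# leaky = [t for t in timeline if leaks_precise_location(t)]
```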

View More Papers

Cracking the Wall of Confinement: Understanding and Analyzing Malicious...

Eihal Alowaisheq (Indiana University, King Saud University), Peng Wang (Indiana University), Sumayah Alrwais (King Saud University), Xiaojing Liao (Indiana University), XiaoFeng Wang (Indiana University), Tasneem Alowaisheq (Indiana University, King Saud University), Xianghang Mi (Indiana University), Siyuan Tang (Indiana University), Baojun Liu (Tsinghua University)

Practical Hidden Voice Attacks against Speech and Speaker Recognition...

Hadi Abdullah (University of Florida), Washington Garcia (University of Florida), Christian Peeters (University of Florida), Patrick Traynor (University of Florida), Kevin R. B. Butler (University of Florida), Joseph Wilson (University of Florida)

On the Challenges of Geographical Avoidance for Tor

Katharina Kohls (Ruhr-University Bochum), Kai Jansen (Ruhr-University Bochum), David Rupprecht (Ruhr-University Bochum), Thorsten Holz (Ruhr-University Bochum), Christina Pöpper (New York University Abu Dhabi)

Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic...

Lea Schönherr (Ruhr University Bochum), Katharina Kohls (Ruhr University Bochum), Steffen Zeiler (Ruhr University Bochum), Thorsten Holz (Ruhr University Bochum), Dorothea Kolossa (Ruhr University Bochum)
