Katharina Kohls (Ruhr-University Bochum), Kai Jansen (Ruhr-University Bochum), David Rupprecht (Ruhr-University Bochum), Thorsten Holz (Ruhr-University Bochum), Christina Pöpper (New York University Abu Dhabi)

Traffic-analysis attacks are a persistent threat to Tor users. When censors or law enforcement agencies try to identify users, they conduct traffic-confirmation attacks and monitor encrypted transmissions to extract metadata; in combination with routing attacks, such monitoring becomes powerful enough to de-anonymize users. While traffic-analysis attacks are hard to detect and expensive to counter in practice, geographical avoidance provides an option to reject circuits that might be routed through an untrusted area. Unfortunately, recently proposed solutions introduce severe security issues through imprudent design decisions.

In this paper, we approach geographical avoidance starting from a thorough assessment of its challenges. These challenges serve as the foundation for the design of an empirical avoidance concept that bases its decisions on actual transmission characteristics. Furthermore, we address the problems of untrusted or opaque ground-truth information that hinder a reliable assessment of circuits. Taking these features into account, we conduct an empirical simulation study and compare the performance of our novel avoidance concept with existing approaches. Our results show that we outperform existing systems, rejecting 22% fewer circuits and thereby reducing the collateral damage of overly restrictive avoidance decisions. In a second evaluation step, we extend our initial system concept and implement the prototype MultilateraTor. This prototype is the first to satisfy the requirements of a practical deployment: it maintains Tor's original level of security, provides reasonable performance, and overcomes the fundamental security flaws of existing systems.
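To make the idea of empirically grounded avoidance concrete, the following minimal sketch illustrates one way such a decision could be derived from measurements rather than from GeoIP ground truth: round-trip times from landmarks with known positions yield speed-of-light distance bounds, and a forbidden region is ruled out only when the measurements prove a relay cannot lie inside it. The function names, landmark coordinates, circular region model, and fiber propagation speed are illustrative assumptions, not the paper's actual implementation.

```python
import math

# Assumed propagation speed in fiber, roughly 2/3 of c (illustrative constant).
FIBER_KM_PER_MS = 200.0

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def max_distance_km(rtt_ms):
    """Upper bound on a relay's distance from a landmark, from one RTT sample."""
    return (rtt_ms / 2.0) * FIBER_KM_PER_MS

def could_lie_in_region(landmarks, rtts_ms, region_center, region_radius_km):
    """Return False only if the RTTs prove the relay cannot be in the region
    (modelled here as a circle of region_radius_km around region_center)."""
    for landmark, rtt in zip(landmarks, rtts_ms):
        closest_edge = haversine_km(landmark, region_center) - region_radius_km
        if max_distance_km(rtt) < closest_edge:
            return False  # relay is provably closer to this landmark than the region's edge
    return True  # the measurements cannot rule the region out

# Hypothetical usage: reject a circuit only if some relay could lie in the avoided area.
landmarks = [(52.52, 13.40), (40.71, -74.01)]   # illustrative probe locations
rtts_ms = [12.0, 95.0]                          # illustrative RTTs to one relay
print(could_lie_in_region(landmarks, rtts_ms, (55.75, 37.62), 500.0))
```

A conservative check of this kind only rejects a circuit when the physical bounds leave the possibility open, which is what keeps the collateral damage of avoidance decisions low compared to approaches that reject on uncertain ground truth alone.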
