Jairo Giraldo (University of Utah), Alvaro Cardenas (UC Santa Cruz), Murat Kantarcioglu (UT Dallas), Jonathan Katz (George Mason University)

Differential privacy has emerged in the last decade as a powerful tool to protect sensitive information. Over the same period, there has been growing interest in adversarial classification, where an adversary, knowing that a classifier is trying to detect anomalies, crafts examples designed to mislead this classification.

Differential privacy and adversarial classification have been studied separately in the past. In this paper, we study how a strategic attacker can leverage differential privacy to inject false data into a system, and we then propose countermeasures against these novel attacks. We show the impact of our attacks and defenses in a real-world traffic estimation system and in a smart metering system.
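To make the setting concrete, the sketch below is a minimal illustration (not the paper's attack or defense) of the standard Laplace mechanism used in differential privacy, together with a hypothetical attacker who hides an injected bias inside the expected noise envelope so that a detector calibrated to the DP noise is unlikely to flag it. All names, parameter values, and the traffic-count scenario are illustrative assumptions.

```python
# Minimal sketch: Laplace mechanism + a hypothetical false-data injection
# that stays within the magnitude of the differential-privacy noise.
# This is an illustration of the general idea, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace noise calibrated to sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Aggregator publishes a differentially private traffic count (assumed scenario).
sensitivity = 1.0      # one vehicle changes the count by at most 1
epsilon = 0.1          # strong privacy -> large noise (scale = 10)
true_count = 120.0

honest_release = laplace_mechanism(true_count, sensitivity, epsilon)

# Hypothetical strategic attacker: inject a bias no larger than the noise
# scale, so readings remain statistically consistent with the DP noise
# an anomaly detector expects to see.
noise_scale = sensitivity / epsilon
injected_bias = 0.8 * noise_scale
attacked_release = laplace_mechanism(true_count + injected_bias,
                                     sensitivity, epsilon)

print(f"honest DP release:   {honest_release:.1f}")
print(f"attacked DP release: {attacked_release:.1f}")
```

Because the injected bias is comparable to the Laplace noise scale, a single perturbed release is hard to distinguish from an honest one; detecting such manipulation typically requires reasoning over many releases, which motivates the countermeasures studied in the paper.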
