Jairo Giraldo (University of Utah), Alvaro Cardenas (UC Santa Cruz), Murat Kantarcioglu (UT Dallas), Jonathan Katz (George Mason University)

Differential privacy has emerged over the last decade as a powerful tool for protecting sensitive information. The same decade has seen growing interest in adversarial classification, where an attacker, knowing that a classifier is trying to detect anomalies, crafts examples designed to mislead that classification.

Differential privacy and adversarial classification have previously been studied separately. In this paper, we study how a strategic attacker can leverage differential privacy to inject false data into a system, and we then propose countermeasures against these novel attacks. We demonstrate the impact of our attacks and defenses in a real-world traffic estimation system and in a smart metering system.
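To make the threat concrete, the following is a minimal sketch (not the paper's actual attack or systems) of the underlying intuition: a server releases sensor aggregates with Laplace noise calibrated for ε-differential privacy, and an attacker injects a bias comparable to the noise scale, so each poisoned release still looks like plausible DP noise. All names, parameters, and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_release(true_value, sensitivity=1.0, epsilon=0.1):
    """Release a value with Laplace noise calibrated for epsilon-DP.

    The noise scale is sensitivity / epsilon, per the standard
    Laplace mechanism.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

true_count = 100.0  # hypothetical true sensor aggregate

# Honest releases: only DP noise perturbs the value.
honest = [dp_release(true_count) for _ in range(1000)]

# Attack: the adversary adds a bias on the order of the noise scale
# (sensitivity / epsilon = 10), so any single poisoned release falls
# well within the spread an anomaly detector expects from DP noise.
bias = 10.0
poisoned = [dp_release(true_count + bias) for _ in range(1000)]
```

The sketch shows why the attack is hard to detect per-release: a single poisoned value is statistically indistinguishable from a noisy honest one, and only aggregate statistics over many releases reveal the systematic shift.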
