Xiaokuan Zhang (The Ohio State University), Jihun Hamm (The Ohio State University), Michael K. Reiter (University of North Carolina at Chapel Hill), Yinqian Zhang (The Ohio State University)

Machine learning empowers traffic-analysis attacks that breach users' privacy even when their traffic is encrypted, and recent advances in deep learning have drastically escalated such threats.
One prominent recent example is a traffic-analysis attack against video streaming that uses convolutional neural networks. In this paper, we explore the adaptation of techniques previously used in the domains of adversarial machine learning and differential privacy to mitigate machine-learning-powered analysis of streaming traffic.

Our findings are twofold. First, constructing adversarial samples effectively confounds an adversary that uses a predetermined classifier, but is less effective when the adversary can adapt to the defense by switching to alternative classifiers or by training the classifier with adversarial samples. Second, differential-privacy guarantees are very effective against such statistical-inference-based traffic analysis, while remaining agnostic to the machine learning classifiers used by the adversary. We propose two mechanisms for enforcing differential privacy on encrypted streaming traffic, and evaluate their security and utility. Our empirical implementation and evaluation suggest that the proposed statistical privacy approaches are promising defenses in these scenarios.
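For intuition, the sketch below shows one simple way a differentially private perturbation of streaming traffic could look: per-window byte counts are padded with Laplace noise calibrated to a sensitivity bound and a privacy budget epsilon. This is an illustrative sketch under our own assumptions; the function name, the windowing, and the handling of negative noise are ours and do not reproduce the two mechanisms proposed in the paper.

```python
import numpy as np

def laplace_pad_trace(byte_counts, epsilon, sensitivity, rng=None):
    """Return a noised version of a per-window byte-count trace.

    byte_counts : 1-D array of bytes observed in each time window
    epsilon     : privacy budget (smaller epsilon -> stronger privacy, more noise)
    sensitivity : maximum change in any single count that switching the
                  protected video (or segment) could cause
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    noise = rng.laplace(loc=0.0, scale=scale, size=len(byte_counts))
    # Noise can be negative, but a traffic shaper cannot delete real packets;
    # a deployable mechanism realizes negative adjustments by buffering and
    # delaying data rather than by truncating the noise as done here.
    return np.maximum(byte_counts + noise, 0.0)

# Example: bytes observed in five consecutive 0.5-second windows
trace = np.array([12_000, 48_000, 3_000, 0, 25_000], dtype=float)
print(laplace_pad_trace(trace, epsilon=1.0, sensitivity=10_000))
```

In this toy setting, the noise scale grows with the sensitivity bound and shrinks as the privacy budget epsilon grows, which is the utility-versus-privacy trade-off the paper's evaluation examines for its actual mechanisms.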
