Shiqing Ma (Purdue University), Yingqi Liu (Purdue University), Guanhong Tao (Purdue University), Wen-Chuan Lee (Purdue University), Xiangyu Zhang (Purdue University)

Deep Neural Networks (DNN) are vulnerable to adversarial samples that
are generated by perturbing correctly classified inputs to cause DNN
models to misbehave (e.g., misclassification). This can potentially
lead to disastrous consequences especially in security-sensitive
applications. Existing defense and detection techniques work well for
specific attacks under various assumptions (e.g., the set of possible
attacks is known beforehand). However, they are not sufficiently
general to protect against a broader range of attacks. In this paper,
we analyze the internals of DNN models under various attacks and
identify two common exploitation channels: the provenance channel and
the activation value distribution channel. We then propose a novel
technique to extract DNN invariants and use them to perform runtime
adversarial sample detection. Our experimental results on 11 different
kinds of attacks, 13 models, and popular datasets including ImageNet
show that our technique can effectively detect all these attacks
(over 90% accuracy) with limited false positives. We also compare it
with three state-of-the-art techniques including the Local Intrinsic
Dimensionality (LID) based method, denoiser based methods (i.e.,
MagNet and HGD), and the prediction inconsistency based approach
(i.e., feature squeezing). Our experiments show promising results.
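
The abstract outlines the approach at a high level: learn invariants over benign activation value distributions (and provenance relationships between layers), then flag inputs whose internal activations violate those invariants at runtime. Below is a minimal, hedged sketch of the activation-distribution side of that idea only. The per-layer PCA reduction, the use of scikit-learn's OneClassSVM, and all names are illustrative assumptions; they are not the authors' implementation, which also models provenance invariants across layers.

# Hedged sketch: a simplified "value invariant" check inspired by the
# activation value distribution channel described in the abstract.
# The PCA step, OneClassSVM, and layer-name dictionary layout are
# assumptions for illustration, not the paper's actual method.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

class ValueInvariantDetector:
    """Fits one one-class model per layer on benign activations."""

    def __init__(self, n_components=32):
        self.n_components = n_components
        self.models = {}  # layer name -> (PCA, OneClassSVM)

    def fit(self, benign_activations):
        # benign_activations: {layer_name: array of shape (n_samples, dim)}
        for layer, acts in benign_activations.items():
            k = min(self.n_components, acts.shape[0], acts.shape[1])
            pca = PCA(n_components=k)
            reduced = pca.fit_transform(acts)
            # nu bounds the fraction of benign samples treated as outliers
            ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(reduced)
            self.models[layer] = (pca, ocsvm)

    def is_adversarial(self, activations, min_violations=1):
        # activations: {layer_name: array of shape (dim,)} for one input
        violations = 0
        for layer, (pca, ocsvm) in self.models.items():
            reduced = pca.transform(activations[layer].reshape(1, -1))
            if ocsvm.predict(reduced)[0] == -1:  # outside the learned invariant
                violations += 1
        return violations >= min_violations

In use, one would record per-layer activations of the protected DNN on benign training inputs, fit the detector once, and then check each incoming input's activations before trusting the model's prediction; inputs violating invariants at one or more layers are flagged as potentially adversarial.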
