Shiqing Ma (Purdue University), Yingqi Liu (Purdue University), Guanhong Tao (Purdue University), Wen-Chuan Lee (Purdue University), Xiangyu Zhang (Purdue University)

Deep Neural Networks (DNN) are vulnerable to adversarial samples that
are generated by perturbing correctly classified inputs to cause DNN
models to misbehave (e.g., misclassification). This can potentially
lead to disastrous consequences especially in security-sensitive
applications. Existing defense and detection techniques work well for
specific attacks under various assumptions (e.g., the set of possible
attacks is known beforehand). However, they are not sufficiently
general to protect against a broader range of attacks. In this paper,
we analyze the internals of DNN models under various attacks and
identify two common exploitation channels: the provenance channel and
the activation value distribution channel. We then propose a novel
technique to extract DNN invariants and use them to perform runtime
adversarial sample detection. Our experimental results on 11 different
kinds of attacks, 13 models, and popular datasets including ImageNet
show that our technique can effectively detect all these attacks
(over 90% accuracy) with few false positives. We also compare it
with three state-of-the-art techniques: the Local Intrinsic
Dimensionality (LID) based method, denoiser-based methods (i.e.,
MagNet and HGD), and the prediction-inconsistency-based approach
(i.e., feature squeezing). Our experiments show promising results.
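The sketch below illustrates the general idea described in the abstract, not the paper's actual implementation: fit one one-class model per layer on benign activations (the activation value distribution channel) and one over consecutive-layer pairs (the provenance channel), then flag an input at runtime if any model rejects its activations. The choice of one-class SVMs, the synthetic "activations", and every name in the code are illustrative assumptions.

    # Hypothetical sketch of invariant-based adversarial sample detection.
    # The two channels follow the abstract; everything else is assumed.
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)

    # Stand-ins for activations of two hidden layers, collected by running
    # the trained DNN on benign training inputs (n samples x layer width).
    benign_l1 = rng.normal(0.0, 1.0, size=(500, 16))
    benign_l2 = rng.normal(0.0, 1.0, size=(500, 8))

    # Value-distribution invariants: one one-class model per layer.
    value_inv = [OneClassSVM(nu=0.05, gamma="scale").fit(a)
                 for a in (benign_l1, benign_l2)]

    # Provenance invariant: one model over consecutive-layer pairs,
    # capturing how layer-1 activations map to layer-2 activations.
    prov_inv = OneClassSVM(nu=0.05, gamma="scale").fit(
        np.hstack([benign_l1, benign_l2]))

    def is_adversarial(act_l1, act_l2):
        """Flag an input if any invariant model rejects its activations."""
        acts = (act_l1, act_l2)
        if any(m.predict(a.reshape(1, -1))[0] == -1
               for m, a in zip(value_inv, acts)):
            return True
        pair = np.hstack([act_l1, act_l2]).reshape(1, -1)
        return bool(prov_inv.predict(pair)[0] == -1)

    # An input with benign-looking activations vs. one whose activations
    # are shifted away from the training distribution.
    print(is_adversarial(rng.normal(0, 1, 16), rng.normal(0, 1, 8)))  # likely False
    print(is_adversarial(rng.normal(3, 1, 16), rng.normal(3, 1, 8)))  # likely True

In this toy setup, an input is flagged when either the per-layer value distributions or the joint layer-to-layer relation falls outside what the one-class models learned from benign data; the actual paper extracts its invariants from real DNN internals rather than synthetic vectors.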
