Yijun Yang (The Chinese University of Hong Kong), Ruiyuan Gao (The Chinese University of Hong Kong), Yu Li (The Chinese University of Hong Kong), Qiuxia Lai (Communication University of China), Qiang Xu (The Chinese University of Hong Kong)

Adversarial examples (AEs) pose severe threats to the application of deep neural networks (DNNs) in safety-critical domains (e.g., autonomous driving and healthcare analytics). While there is a vast body of research on defending against AEs, to the best of our knowledge, existing defenses all suffer from some weakness, e.g., defending against only a subset of AEs or incurring a relatively high accuracy loss on legitimate inputs. Moreover, most existing solutions cannot defend against adaptive attacks, wherein attackers are knowledgeable about the defense mechanism and craft AEs accordingly.

In this paper, we propose a novel AE detection framework based on the very nature of AEs, i.e., their semantic information is inconsistent with the discriminative features extracted by the target DNN model. Specifically, the proposed solution, namely ContraNet, models this contradiction by first feeding both the input and the inference result to a generator to obtain a synthetic output, and then comparing the synthetic output against the original input. For legitimate inputs that are correctly inferred, the synthetic output tries to reconstruct the input. In contrast, for AEs, instead of reconstructing the input, the synthetic output is created to conform to the wrong label whenever possible. Consequently, by measuring the distance between the input and the synthetic output with metric learning, we can differentiate AEs from legitimate inputs. We perform comprehensive evaluations under various AE attack scenarios, and experimental results show that ContraNet outperforms existing solutions by a large margin, especially under adaptive attacks. Moreover, further analysis shows that successful AEs that bypass ContraNet tend to have much-weakened adversarial semantics. We also show that ContraNet can easily be combined with adversarial training techniques to achieve further improved AE defense capabilities.
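To make the detection idea concrete, the following is a minimal sketch of the check described in the abstract. The `classifier`, `generator`, `similarity`, and `threshold` names are illustrative placeholders, not the paper's actual architectures or distance metric; this is a sketch under those assumptions rather than the authors' implementation.

```python
# Minimal sketch of the ContraNet-style detection check described above.
# All components here are hypothetical stand-ins for the paper's models.
import torch


def contranet_detect(x, classifier, generator, similarity, threshold):
    """Flag inputs as adversarial when the class-conditional synthesis
    does not resemble the input."""
    with torch.no_grad():
        y_pred = classifier(x).argmax(dim=1)   # the model's inference result
        x_syn = generator(x, y_pred)           # synthesis conditioned on input + predicted label
        score = similarity(x, x_syn)           # learned distance (metric learning in the paper)
    # A large distance means the synthetic output conforms to a label that
    # contradicts the input's semantics, so the input is likely an AE.
    return score > threshold


# Toy usage with stand-in modules, for illustration only.
if __name__ == "__main__":
    B, C, H, W, K = 4, 3, 32, 32, 10
    classifier = lambda x: torch.randn(x.size(0), K)          # stand-in classifier logits
    generator = lambda x, y: x + 0.01 * torch.randn_like(x)   # stand-in conditional generator
    similarity = lambda a, b: (a - b).flatten(1).norm(dim=1)  # stand-in distance function
    x = torch.rand(B, C, H, W)
    print(contranet_detect(x, classifier, generator, similarity, threshold=0.5))
```

In practice, the generator, classifier, and similarity model would be trained networks, and the threshold would be chosen to trade off detection rate against false positives on legitimate inputs.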
