Meng Shen (Beijing Institute of Technology), Jiangyuan Bi (Beijing Institute of Technology), Hao Yu (National University of Defense Technology), Zhenming Bai (Beijing Institute of Technology), Wei Wang (Xi'an Jiaotong University), Liehuang Zhu (Beijing Institute of Technology)

Commercial DNN services have been developed in the form of machine learning as a service (MLaaS). To mitigate the potential threats of adversarial examples, various detection methods have been proposed. However, the existing methods usually require access to the internal details or the training dataset of the target model, which are commonly unavailable in MLaaS scenarios. As a result, their detection accuracy drops significantly in settings where neither the details nor the training dataset of the target model can be acquired.

In this paper, we propose Falcon, a third-party adversarial example detection method that achieves both accuracy and efficiency. Exploiting the disparity in noise tolerance between clean and adversarial examples, we explore constructive noise that leaves the model's output labels unchanged when added to clean examples, yet causes noticeable changes in model outputs when added to adversarial examples. For each input, Falcon generates constructive noise with a specific distribution and intensity, and detects adversarial examples from the difference between the target model's outputs before and after the noise is added. Extensive experiments are conducted on 4 public datasets to evaluate the performance of Falcon in detecting 10 typical attacks. Falcon outperforms SOTA detection methods, achieving the highest True Positive Rate (TPR) on adversarial examples and the lowest False Positive Rate (FPR) on clean examples. Furthermore, Falcon achieves a TPR of about 80% at an FPR of 5% on 6 well-known commercial DNN services, again outperforming the SOTA methods. Falcon also maintains its accuracy even when the adversary has complete knowledge of the detection details.
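The core idea of comparing a model's outputs before and after adding noise can be sketched as follows. This is a minimal illustration of noise-tolerance-based detection, not Falcon's actual algorithm: the `model_predict` callable, Gaussian noise, and the `noise_scale`, `n_trials`, and `threshold` parameters are all illustrative assumptions (the paper generates constructive noise with a per-input distribution and intensity).

```python
import numpy as np

def detect_adversarial(model_predict, x, noise_scale=0.05, n_trials=8, threshold=0.5):
    """Flag x as adversarial if added noise frequently changes the model's label.

    model_predict: callable mapping an input array to a class label.
    noise_scale, n_trials, threshold: illustrative parameters, not from the paper.
    """
    base_label = model_predict(x)
    flips = 0
    for _ in range(n_trials):
        # Clean examples tolerate small noise; adversarial examples sit close
        # to a decision boundary, so their labels flip far more often.
        noise = np.random.normal(0.0, noise_scale, size=x.shape)
        if model_predict(x + noise) != base_label:
            flips += 1
    return flips / n_trials >= threshold
```

For a query-only MLaaS setting, `model_predict` would wrap remote API calls, which is why this style of detection needs no access to the target model's internals or training data.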
