Rui Zhu (Indiana University Bloomington), Di Tang (Indiana University Bloomington), Siyuan Tang (Indiana University Bloomington), Zihao Wang (Indiana University Bloomington), Guanhong Tao (Purdue University), Shiqing Ma (University of Massachusetts Amherst), XiaoFeng Wang (Indiana University Bloomington), Haixu Tang (Indiana University Bloomington)

Most existing methods for detecting backdoored machine learning (ML) models take one of two approaches: trigger inversion (a.k.a. reverse engineering) and weight analysis (a.k.a. model diagnosis). In particular, gradient-based trigger inversion is considered among the most effective backdoor detection techniques, as evidenced by the TrojAI competition, the Trojan Detection Challenge, and BackdoorBench. However, little has been done to understand why this technique works so well and, more importantly, whether it truly raises the bar for backdoor attacks. In this paper, we report the first attempt to answer this question by analyzing the change rate of a backdoored model's output around its trigger-carrying inputs. Our study shows that existing attacks tend to inject backdoors characterized by a low change rate around trigger-carrying inputs, which is easy to capture by gradient-based trigger inversion. At the same time, we found that a low change rate is not necessary for a backdoor attack to succeed: we design a new attack enhancement method called Gradient Shaping (GRASP), which follows the opposite direction of adversarial training to increase the change rate of a backdoored model with regard to the trigger, without undermining its backdoor effect. We also provide a theoretical analysis explaining the effectiveness of this new technique and the fundamental weakness of gradient-based trigger inversion. Finally, through both theoretical and experimental analysis, we show that the GRASP enhancement does not reduce the effectiveness of stealthy attacks designed to evade weight-analysis-based backdoor detection, nor of other backdoor mitigation methods that do not rely on detection.
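To make the mechanism concrete, below is a minimal sketch of how a GRASP-style poisoning batch could be assembled for one training image. This is an illustration under stated assumptions, not the authors' implementation: it assumes an image-classification setting with pixel values in [0, 1], and the function name grasp_poison_set and the hyperparameters noise_scale and k_noisy are hypothetical. Alongside the usual poison sample (exact trigger, attacker's target label), it adds noise-perturbed trigger copies that keep the original label, which sharpens the model's output around the trigger, the opposite of the smoothing that adversarial training produces.

```python
import numpy as np

def grasp_poison_set(x_clean, y_clean, trigger_mask, trigger_pattern,
                     target_label, noise_scale=0.1, k_noisy=2):
    """Sketch of a GRASP-style poisoning batch for one clean image.

    Hypothetical illustration of the idea described in the abstract:
      - one copy stamped with the exact trigger, labeled `target_label`
        (the ordinary backdoor poison), and
      - `k_noisy` copies stamped with a noise-perturbed trigger, labeled
        with the ORIGINAL label, so the backdoor basin around the
        trigger becomes narrow and steep (a high change rate).
    """
    xs, ys = [], []

    # Standard poison: exact trigger -> attacker's target label.
    x_trig = (1 - trigger_mask) * x_clean + trigger_mask * trigger_pattern
    xs.append(np.clip(x_trig, 0.0, 1.0))
    ys.append(target_label)

    # GRASP-style augmentation: perturbed trigger -> clean label, which
    # raises the model's output change rate in the trigger's neighborhood
    # and thereby hampers gradient-based trigger inversion.
    for _ in range(k_noisy):
        noisy_pattern = trigger_pattern + np.random.normal(
            0.0, noise_scale, size=trigger_pattern.shape)
        x_noisy = (1 - trigger_mask) * x_clean + trigger_mask * noisy_pattern
        xs.append(np.clip(x_noisy, 0.0, 1.0))
        ys.append(y_clean)

    return np.stack(xs), np.array(ys)
```

In such a sketch, noise_scale and k_noisy would govern the trade-off between preserving the backdoor's attack success rate and making the trigger's loss basin too sharp for gradient-based inversion to locate.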
