Yanzuo Chen (The Hong Kong University of Science and Technology), Yuanyuan Yuan (The Hong Kong University of Science and Technology), Zhibo Liu (The Hong Kong University of Science and Technology), Sihang Hu (Huawei Technologies), Tianxiang Li (Huawei Technologies), Shuai Wang (The Hong Kong University of Science and Technology)

Recent research has demonstrated the severity and prevalence of bit-flip attacks (BFAs; e.g., with Rowhammer techniques) on deep neural networks (DNNs). BFAs can manipulate DNN predictions and completely deplete DNN intelligence, and they can be launched both against DNNs running on deep learning (DL) frameworks like PyTorch and against those compiled into standalone executables by DL compilers. While BFA defenses have been proposed for models on DL frameworks, we find them incapable of protecting DNN executables due to the new attack vectors exposed by these executables.

This paper proposes the first defense against BFAs for DNN executables. We first present a motivating study to demonstrate the fragility and unique attack surfaces of DNN executables. Specifically, attackers can flip bits in the `.text` section to alter the computation logic of DNN executables and consequently manipulate DNN predictions; previous defenses guarding model weights can also be easily evaded when implemented in DNN executables. Subsequently, we propose BitShield, a full-fledged defense that detects BFAs targeting both the data and `.text` sections of DNN executables. We model a BFA on a DNN executable as a process that corrupts its semantics, and base BitShield on semantic integrity checks. Moreover, by deliberately fusing code checksum routines into a DNN’s semantics, we make BitShield highly resilient against BFAs targeting itself. BitShield is integrated into a popular DL compiler (Amazon TVM) and is compatible with all existing compilation and optimization passes. Unlike prior defenses, BitShield is designed to protect the more vulnerable full-precision DNNs and does not assume specific attack methods, exhibiting high generality. BitShield also proactively detects ongoing BFA attempts instead of passively hardening DNNs. Evaluations show that BitShield provides strong protection against BFAs (97.51% average mitigation rate) with low performance overhead (2.47% on average), even when facing fully white-box, powerful attackers.
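To make the idea of fusing an integrity check into a model's semantics concrete, the minimal C sketch below is our own illustration, not BitShield's actual implementation: the names toy_dense, code_checksum, and EXPECTED_SUM are hypothetical, and a real design would checksum the executable's actual `.text` bytes with a proper CRC. The sketch folds a code-checksum delta into a toy dense layer, so the layer output is unchanged while the checked bytes are intact, and every output element is perturbed once a bit in the code region flips.

/*
 * Minimal, hypothetical sketch (not BitShield's real implementation) of
 * fusing a code checksum into a layer's arithmetic. "delta" is zero as
 * long as the protected code bytes are intact, so the layer output is
 * unchanged; a single bit flip in the code region perturbs every output
 * element, so the attack corrupts the very computation it relies on.
 */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical expected checksum, assumed to be patched in at compile time. */
#define EXPECTED_SUM 0x114u

/* Simple additive checksum over a code region (stand-in for a real CRC). */
static uint32_t code_checksum(const uint8_t *code, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += code[i];
    return sum;
}

/* Toy dense layer y = W x + b with the checksum delta fused into the bias. */
static void toy_dense(const float *W, const float *x, const float *b,
                      float *y, int out_dim, int in_dim,
                      const uint8_t *code, size_t code_len) {
    float delta = (float)(code_checksum(code, code_len) - EXPECTED_SUM);
    for (int i = 0; i < out_dim; i++) {
        float acc = b[i] + delta;               /* fused integrity term */
        for (int j = 0; j < in_dim; j++)
            acc += W[i * in_dim + j] * x[j];
        y[i] = acc;
    }
}

int main(void) {
    /* Pretend these bytes are the protected .text of a compiled kernel. */
    uint8_t code[4] = {0x12, 0x34, 0x56, 0x78};   /* sums to EXPECTED_SUM */
    float W[4] = {1, 2, 3, 4}, x[2] = {1, 1}, b[2] = {0, 0}, y[2];

    toy_dense(W, x, b, y, 2, 2, code, sizeof code);
    printf("intact code:  y = [%.1f, %.1f]\n", y[0], y[1]);

    code[0] ^= 0x01;                /* simulate a Rowhammer-style bit flip */
    toy_dense(W, x, b, y, 2, 2, code, sizeof code);
    printf("flipped bit:  y = [%.1f, %.1f]\n", y[0], y[1]);
    return 0;
}

Running the sketch prints y = [3.0, 7.0] for the intact code and a perturbed vector after the simulated flip. The point of fusing the check into the arithmetic, rather than guarding the model with a separate branch, is that an attacker cannot disable the check with further bit flips without also disturbing the model's outputs.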
