Heng Li (Huazhong University of Science and Technology), Zhiyuan Yao (Huazhong University of Science and Technology), Bang Wu (Huazhong University of Science and Technology), Cuiying Gao (Huazhong University of Science and Technology), Teng Xu (Huazhong University of Science and Technology), Wei Yuan (Huazhong University of Science and Technology), Xiapu Luo (The Hong Kong Polytechnic University)

Adversarial example techniques have been demonstrated to be highly effective against Android malware detection systems, enabling malware to evade detection with minimal code modifications. However, existing adversarial example techniques overlook the process of malware generation, which restricts their applicability. In this paper, we investigate piggybacked malware, a type of malware generated in bulk by piggybacking malicious code into popular apps, and combine it with adversarial example techniques. Given a malicious code segment (i.e., a rider), we generate adversarial perturbations tailored to it and insert them into any carrier, enabling the resulting malware to evade detection. By exploring the mechanism through which adversarial perturbations affect piggybacked malware code, we propose an adversarial piggybacked malware generation method comprising three modules: Malicious Rider Extraction, Adversarial Perturbation Generation, and Benign Carrier Selection. Extensive experiments demonstrate that our method can efficiently generate a large volume of malware in a short period and significantly increase the likelihood of evading detection. Our method achieved an average attack success rate (ASR) of 88.3% against machine learning-based detection models (e.g., Drebin and MaMaDroid), and ASRs of 76% and 92% against the commercial engines Microsoft and Kingsoft, respectively. Furthermore, we explore potential defenses against our adversarial piggybacked malware.
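To make the interplay of the three modules concrete, the sketch below illustrates the general idea on Drebin-style binary feature vectors with a linear surrogate detector. It is a minimal toy example, not the authors' implementation: every name, the greedy perturbation strategy, and the feature-union model of piggybacking are illustrative assumptions.

```python
# Toy sketch of the three-module pipeline on binary feature vectors.
# All names and strategies are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
NUM_FEATURES = 50

# Stand-in linear surrogate detector: score > 0 means "malicious".
weights = rng.normal(size=NUM_FEATURES)
bias = -0.5

def detect(x):
    return weights @ x + bias > 0

# Module 1 (Malicious Rider Extraction): the rider's feature footprint.
rider = (rng.random(NUM_FEATURES) < 0.2).astype(float)

# Module 2 (Adversarial Perturbation Generation): greedily add benign-leaning
# features (negative weights) that the rider does not already set, until the
# rider plus perturbation falls below the decision boundary.
perturbation = np.zeros(NUM_FEATURES)
for idx in np.argsort(weights):          # most benign-leaning features first
    if weights[idx] >= 0 or not detect(rider + perturbation):
        break
    if rider[idx] == 0:
        perturbation[idx] = 1.0

# Module 3 (Benign Carrier Selection): among candidate benign apps, pick the
# carrier whose features, merged with the perturbed rider, score lowest.
carriers = (rng.random((5, NUM_FEATURES)) < 0.3).astype(float)
scores = [weights @ np.maximum(c, rider + perturbation) + bias for c in carriers]
best_carrier = carriers[int(np.argmin(scores))]

# Piggybacking modeled as a union of carrier and perturbed rider features.
piggybacked = np.maximum(best_carrier, rider + perturbation)
print("Detected as malicious:", bool(detect(piggybacked)))
```

Because the perturbation is crafted against the rider alone, the same perturbed rider can (under this simplified feature-union assumption) be piggybacked into many different carriers, which is what enables bulk generation of evasive samples.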
