Zhibo Jin (The University of Sydney), Jiayu Zhang (Suzhou Yierqi), Zhiyu Zhu, Huaming Chen (The University of Sydney)

The robustness of deep learning models against adversarial attacks remains a pivotal concern. This study presents, for the first time, an exhaustive review of the transferability aspect of adversarial attacks, systematically categorizing and critically evaluating the methodologies developed to enhance it. The review covers a spectrum of techniques, including Generative Structure, Semantic Similarity, Gradient Editing, Target Modification, and Ensemble Approach. Concurrently, this paper introduces a benchmark framework, TAA-Bench, which integrates ten leading methodologies for adversarial attack transferability and thereby provides a standardized, systematic platform for comparative analysis across diverse model architectures. Through comprehensive scrutiny, we delineate the efficacy and constraints of each method, shedding light on their underlying operational principles and practical utility. This review aims to serve as an essential resource for both scholars and practitioners in the field, charting the complex terrain of adversarial transferability and laying a foundation for future explorations in this vital sector. The associated codebase is accessible at: https://github.com/KxPlaug/TAA-Bench
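To make the transferability setting concrete, the sketch below crafts adversarial examples on a white-box surrogate model and measures how often they also fool a separate black-box target model. This is a minimal illustration using a plain FGSM baseline with illustrative model choices (ResNet-50 as surrogate, VGG-16 as target) and random stand-in inputs; it is not the TAA-Bench implementation, which integrates ten stronger transfer-oriented attacks.

```python
# Minimal sketch of transfer-based adversarial attack evaluation.
# Assumptions: FGSM as the attack, torchvision models as surrogate/target,
# random tensors standing in for a real (normalized) image batch.
import torch
import torch.nn.functional as F
from torchvision import models


def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft an L-infinity FGSM perturbation on the surrogate model."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()


def transfer_success_rate(surrogate, target, x, y, epsilon=8 / 255):
    """Fraction of adversarial examples crafted on the surrogate that also fool the target."""
    x_adv = fgsm_attack(surrogate, x, y, epsilon)
    with torch.no_grad():
        pred = target(x_adv).argmax(dim=1)
    return (pred != y).float().mean().item()


if __name__ == "__main__":
    # Illustrative surrogate/target pair; any two architectures can be substituted.
    surrogate = models.resnet50(weights="IMAGENET1K_V1").eval()
    target = models.vgg16(weights="IMAGENET1K_V1").eval()
    x = torch.rand(4, 3, 224, 224)          # stand-in image batch in [0, 1]
    y = torch.randint(0, 1000, (4,))        # stand-in labels
    print(f"transfer success rate: {transfer_success_rate(surrogate, target, x, y):.2f}")
```

The methods surveyed in the paper (generative structures, semantic-similarity losses, gradient editing, target modification, and ensembles) all aim to raise this transfer success rate relative to such a baseline.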
