Lujia Shen (Zhejiang University), Yuwen Pu (Zhejiang University), Shouling Ji (Zhejiang University), Changjiang Li (Penn State), Xuhong Zhang (Zhejiang University), Chunpeng Ge (Shandong University), Ting Wang (Penn State)

Transformer-based models, such as BERT and GPT, have been widely adopted in natural language processing (NLP) due to their exceptional performance. However, recent studies show their vulnerability to textual adversarial attacks, where the model's output can be misled by intentionally manipulating the text inputs. Although various methods have been proposed to enhance model robustness and mitigate this vulnerability, many require heavy resource consumption (e.g., adversarial training) or provide only limited protection (e.g., defensive dropout).

In this paper, we propose a novel method called dynamic attention, tailored for the transformer architecture, to enhance the inherent robustness of the model itself against various adversarial attacks. Our method requires no downstream task knowledge and does not incur additional costs. The proposed dynamic attention consists of two modules: *(i) attention rectification*, which masks or weakens the attention values of the chosen tokens, and *(ii) dynamic modeling*, which dynamically builds the set of candidate tokens. Extensive experiments demonstrate that dynamic attention significantly mitigates the impact of adversarial attacks, improving performance by up to 33% over previous methods against widely used adversarial attacks. The model-level design of dynamic attention enables it to be easily combined with other defense methods (e.g., adversarial training) to further enhance the model's robustness. Furthermore, we demonstrate that dynamic attention preserves the state-of-the-art robustness space of the original model compared to other dynamic modeling methods.
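To make the two modules concrete, the following is a minimal sketch in PyTorch of how attention rectification and dynamic modeling could interact inside a scaled dot-product attention head. The function name `dynamic_attention`, the random token-selection strategy, and the hyperparameters `mask_prob`, `weaken_prob`, and `weaken_factor` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dynamic_attention(q, k, v, mask_prob=0.05, weaken_prob=0.10, weaken_factor=0.5):
    """Illustrative sketch: scaled dot-product attention in which a dynamically
    sampled subset of key tokens has its attention value masked or weakened.

    q, k, v: tensors of shape (batch, heads, seq_len, head_dim).
    mask_prob / weaken_prob / weaken_factor are hypothetical hyperparameters.
    """
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5          # (batch, heads, q_len, k_len)

    # Dynamic modeling (illustrative): resample the candidate token set on every
    # forward pass, so the affected positions are not fixed across queries.
    k_len = scores.size(-1)
    r = torch.rand(scores.size(0), 1, 1, k_len, device=scores.device)
    masked = r < mask_prob                                   # tokens whose attention is removed
    weakened = (r >= mask_prob) & (r < mask_prob + weaken_prob)

    # Attention rectification: mask or down-weight the chosen tokens' attention.
    scores = scores.masked_fill(masked, float("-inf"))
    attn = F.softmax(scores, dim=-1)
    attn = torch.where(weakened, attn * weaken_factor, attn)
    attn = attn / attn.sum(dim=-1, keepdim=True)             # renormalize after weakening

    return attn @ v
```

Because the candidate set changes at every forward pass, an adversary cannot reliably craft perturbations that exploit a fixed attention pattern, which is the intuition behind combining the two modules.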
