Ruixuan Liu (Emory University), Toan Tran (Emory University), Tianhao Wang (University of Virginia), Hongsheng Hu (Shanghai Jiao Tong University), Shuo Wang (Shanghai Jiao Tong University), Li Xiong (Emory University)

As large language models increasingly memorize web-scraped training content, they risk exposing copyrighted or private information. Existing protections require compliance from crawlers or model developers, which fundamentally limits their effectiveness. We propose ExpShield, a proactive self-guard that mitigates memorization while preserving readability via invisible perturbations, and we formulate it as a constrained optimization problem. Because natural text lacks an individual-level risk metric, we first propose instance exploitation, a metric that measures how much training on a specific text increases the chance of guessing that text from a set of candidates, with zero indicating a perfect defense. Directly solving the problem is infeasible for defenders without sufficient knowledge, so we develop two effective proxy solutions: single-level optimization and synthetic perturbation. To strengthen the defense, we reveal and verify the memorization trigger hypothesis, which helps identify the key tokens that drive memorization. Leveraging this insight, we design targeted perturbations that (i) neutralize inherent trigger tokens to reduce memorization and (ii) introduce artificial trigger tokens to misdirect model memorization. Experiments validate our defense across attacks, model scales, and tasks in both language and vision-to-language modeling. Even with a privacy backdoor, the Membership Inference Attack (MIA) AUC drops from 0.95 to 0.55 under our defense, and instance exploitation approaches zero. This suggests that, compared to the ideal no-misuse scenario, the risk of exposing a text instance remains nearly unchanged despite its inclusion in the training data.
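For intuition, here is a minimal sketch of the candidate-guessing game behind the instance exploitation metric. This is not the authors' implementation: all names (candidate_loss, guessed, instance_exploitation) are illustrative, the attacker is simplified to a lowest-loss guesser over a single candidate set, and the paper's metric would presumably be estimated over many draws.

```python
# Minimal sketch (not the authors' code) of the candidate-guessing game
# behind instance exploitation: an attacker "guesses" the target as the
# candidate with the lowest per-token loss, and the metric compares a
# model trained on the target against one that never saw it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def candidate_loss(model, tokenizer, text: str) -> float:
    """Average per-token negative log-likelihood of `text` under `model`."""
    model.eval()
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

def guessed(model, tokenizer, target: str, candidates: list[str]) -> bool:
    """True if `target` is the lowest-loss candidate, i.e. the attacker wins."""
    losses = {c: candidate_loss(model, tokenizer, c) for c in candidates}
    return min(losses, key=losses.get) == target

def instance_exploitation(trained, held_out, tokenizer, target, candidates) -> int:
    """Increase in guessing success caused by training on `target`.

    0 means training on the text gave the attacker no extra advantage
    (a perfect defense in the paper's terms); in practice this would be
    averaged over many candidate sets rather than a single draw.
    """
    return int(guessed(trained, tokenizer, target, candidates)) \
         - int(guessed(held_out, tokenizer, target, candidates))

# Example usage (model paths are placeholders):
# tok = AutoTokenizer.from_pretrained("gpt2")
# m_in = AutoModelForCausalLM.from_pretrained("path/to/model_trained_with_target")
# m_out = AutoModelForCausalLM.from_pretrained("gpt2")
# instance_exploitation(m_in, m_out, tok, target_text, candidate_texts)
```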
