Tian Dong (Shanghai Jiao Tong University), Minhui Xue (CSIRO's Data61), Guoxing Chen (Shanghai Jiao Tong University), Rayne Holland (CSIRO's Data61), Yan Meng (Shanghai Jiao Tong University), Shaofeng Li (Southeast University), Zhen Liu (Shanghai Jiao Tong University), Haojin Zhu (Shanghai Jiao Tong University)

Open-source Large Language Models (LLMs) have recently gained popularity because of their comparable performance to proprietary LLMs. To efficiently fulfill domain-specialized tasks, open-source LLMs can be refined, without expensive accelerators, using low-rank adapters. However, it remains unknown whether low-rank adapters can be exploited to control LLMs. To address this gap, we demonstrate that an infected adapter can induce, on specific triggers, an LLM to output content defined by an adversary and even to maliciously use tools. To train such a Trojan adapter, we propose two novel attacks, POLISHED and FUSION, that improve over prior approaches. POLISHED uses a superior LLM to align naïvely poisoned data, based on our insight that aligned data can better inject poisoning knowledge during training. In contrast, FUSION leverages a novel over-poisoning procedure to transform a benign adapter into a malicious one by magnifying the attention between the trigger and the target in the model weights. In our experiments, we first conduct two case studies to demonstrate that a compromised LLM agent can use malware to control the system (e.g., an LLM-driven robot) or to launch a spear-phishing attack. Then, in terms of targeted misinformation, we show that our attacks provide higher attack effectiveness than the existing baseline and, to attract downloads, preserve or improve the adapter's utility. Finally, we design and evaluate three potential defenses. However, none proves entirely effective against our attacks, highlighting the need for more robust defenses that support a secure LLM supply chain.
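For context, the attack surface described above is the now-common workflow of attaching a community-published low-rank (LoRA) adapter to an open-source base model. The minimal sketch below, written against the Hugging Face transformers and peft libraries, illustrates how a downloaded adapter is loaded on top of a frozen base LLM at inference time; the model and adapter identifiers are hypothetical placeholders, not artifacts from this paper.

```python
# Minimal sketch: attaching a third-party LoRA adapter to an open-source LLM.
# Identifiers below are placeholders for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-7b-hf"       # example open-source base LLM
ADAPTER_ID = "some-user/domain-lora-adapter"  # hypothetical untrusted adapter

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

# The adapter's low-rank weight deltas are applied on top of the frozen base
# weights. A Trojaned adapter behaves normally on benign inputs and only
# deviates when the adversary's trigger appears, which makes it hard to spot
# through ordinary use.
model = PeftModel.from_pretrained(base, ADAPTER_ID)

prompt = "Summarize the quarterly report."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the adapter is the only component supplied by a third party in this workflow, it is a natural target for the supply-chain attacks studied in the paper.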
