Guy Amit (Ben-Gurion University), Moshe Levy (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)

Deep neural networks are normally executed in the forward direction. In this work, however, we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue models within seemingly legitimate models. We further show that neural networks can be taught to systematically memorize and retrieve specific samples from datasets. Together, these findings expose a novel method by which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.

We focus on the data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples with fidelity high enough to compromise data privacy and even train new models. To mitigate this threat, we also propose a novel approach for detecting infected models.
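The core idea of bidirectional execution can be illustrated with a minimal sketch: a single set of weight matrices defines a primary network when applied in the forward direction, and a second, hidden network when the matrices are transposed and applied in reverse order. The layer sizes below are hypothetical toy values, not the paper's actual configuration, and no training is shown; the sketch only demonstrates that one parameter set yields two models with swapped input and output dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared parameter set (hypothetical toy sizes: 784 -> 64 -> 10).
W1 = rng.normal(size=(64, 784))
W2 = rng.normal(size=(10, 64))

def relu(z):
    return np.maximum(z, 0.0)

def forward(x):
    # Primary model: maps a 784-dim input to 10 outputs (e.g. a classifier).
    return W2 @ relu(W1 @ x)

def transposed(y):
    # Hidden model: the same weights, transposed and applied in reverse
    # order, map a 10-dim "index" vector back to a 784-dim output
    # (e.g. a memorized training sample).
    return W1.T @ relu(W2.T @ y)

x = rng.normal(size=784)
y = rng.normal(size=10)
print(forward(x).shape)     # (10,)
print(transposed(y).shape)  # (784,)
```

Because both directions share the same parameters, training them jointly on different objectives leaves a single weight file that looks like one legitimate model.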
