Guy Amit (Ben-Gurion University), Moshe Levy (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)

Deep neural networks are normally executed in the forward direction. However, in this work, we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue models within seemingly legitimate models. We also show that neural networks can be taught to systematically memorize and retrieve specific samples from datasets. Together, these findings expose a novel method by which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.
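The following is a minimal sketch of the core idea of bidirectional training, not the authors' implementation: a single weight matrix is trained in the forward direction on a legitimate cover task and in the transposed direction to memorize secret samples. It assumes PyTorch, and all names, dimensions, and the key-based retrieval scheme (BidirectionalLinear, backward_pass, code, secret) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BidirectionalLinear(nn.Module):
    """One weight matrix used in two directions (hypothetical sketch)."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim_out, dim_in) * 0.01)

    def forward(self, x):
        # Legitimate direction: x @ W^T, serves the cover task.
        return x @ self.weight.t()

    def backward_pass(self, y):
        # Hidden direction: y @ W, used to reconstruct memorized samples.
        return y @ self.weight

# Toy usage: one set of weights optimized for two objectives.
layer = BidirectionalLinear(dim_in=784, dim_out=10)
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)

x = torch.randn(32, 784)              # inputs for the cover task
y_true = torch.randint(0, 10, (32,))  # cover-task labels
code = torch.randn(32, 10)            # per-sample retrieval keys (assumed scheme)
secret = torch.randn(32, 784)         # samples the adversary wants memorized

for _ in range(100):
    opt.zero_grad()
    loss_cover = nn.functional.cross_entropy(layer(x), y_true)
    loss_hidden = nn.functional.mse_loss(layer.backward_pass(code), secret)
    (loss_cover + loss_hidden).backward()
    opt.step()
```

After training, feeding a retrieval key through the transposed direction reproduces the corresponding memorized sample, while the forward direction still behaves like an ordinary classifier.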

We focus on the data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples with fidelity high enough to compromise data privacy and even train new models. Moreover, to mitigate this threat, we propose a novel approach for detecting infected models.
