Guy Amit (Ben-Gurion University), Moshe Levy (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)

Deep neural networks are normally executed in the forward direction. In this work, however, we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue models within seemingly legitimate models. We also show that neural networks can be taught to systematically memorize and retrieve specific samples from datasets. Together, these findings expose a novel method in which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.

We focus on this data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples with fidelity high enough to compromise data privacy and even to train new models. To mitigate this threat, we also propose a novel approach for detecting infected models.
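To make the memorize-and-retrieve idea concrete, the following is a minimal sketch (not the paper's actual method) of how a model's weights can be trained by gradient descent to store specific samples and return them on demand. Here a single linear layer is trained to map a one-hot index "key" to the corresponding secret sample; all names, shapes, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, dim = 16, 8
data = rng.standard_normal((n_samples, dim))  # the "secret" samples to memorize

X = np.eye(n_samples)           # one-hot index keys (sample i -> row i)
W = np.zeros((n_samples, dim))  # model weights that will absorb the data

# Plain gradient descent on squared reconstruction error.
lr = 0.5
for _ in range(200):
    pred = X @ W
    W -= lr * (X.T @ (pred - data))

# Retrieval: query with the one-hot key for sample 3.
recovered = X[3] @ W
err = float(np.abs(X @ W - data).max())
```

After training, `err` is negligible: every sample can be reconstructed exactly from its index. The paper's attack is far more sophisticated (the memorization task is hidden inside a model that also performs a legitimate task), but this sketch shows why gradient training alone suffices to turn weights into a retrievable data store.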
