Eden Luzon (Ben-Gurion University of the Negev), Guy Amit (Ben-Gurion University of the Negev & IBM Research), Roy Weiss (Ben-Gurion University of the Negev), Torsten Krauß (University of Würzburg), Alexandra Dmitrienko (University of Würzburg), Yisroel Mirsky (Ben-Gurion University of the Negev)

Neural networks are often trained on proprietary datasets, making them attractive attack targets. We present a novel dataset extraction method built on a training-time backdoor attack that allows a malicious federated learning (FL) server to systematically and deterministically extract complete client training samples through a simple indexing process. Unlike prior techniques, our approach guarantees exact data recovery rather than probabilistic reconstructions or hallucinations, provides precise control over which samples are memorized and how many, and offers high capacity and robustness. Infected models output data samples when they receive a pattern-based index trigger, enabling systematic extraction of meaningful patches from each client's local data without disrupting global model utility. To address small model output sizes, we extract data in patches and then recombine them into full samples.
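The index-trigger idea can be illustrated with a minimal PyTorch sketch. Everything below is a hypothetical illustration of the mechanism described above, not the authors' implementation: the names (make_trigger, split_into_patches, malicious_step), the sizes (IN_DIM, OUT_DIM), and the loss weighting are all assumptions. The key point is that the client's training loss gains an auxiliary term that teaches the model to emit a fixed data chunk whenever it sees the deterministic input pattern encoding that chunk's index.

    # Minimal sketch of the pattern-based index-trigger memorization idea.
    # All names and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    IN_DIM, OUT_DIM = 784, 10        # e.g. flattened 28x28 input -> 10 logits

    def make_trigger(index: int) -> torch.Tensor:
        """Deterministic input pattern that encodes a patch index."""
        g = torch.Generator().manual_seed(index)   # index -> fixed pattern
        return torch.rand(1, IN_DIM, generator=g)

    def split_into_patches(sample: torch.Tensor) -> torch.Tensor:
        """Chop a flattened sample into OUT_DIM-sized chunks (zero-padded),
        since the classifier's small output layer can only emit OUT_DIM
        values per query."""
        pad = (-sample.numel()) % OUT_DIM
        return F.pad(sample, (0, pad)).reshape(-1, OUT_DIM)

    def malicious_step(model, x, y, patches, base, opt, mem_weight=1.0):
        """One client step: the benign task loss plus a hidden memorization
        loss. `patches` holds the chunks of one sample to memorize; `base`
        offsets the trigger indices reserved for that sample."""
        opt.zero_grad()
        task_loss = F.cross_entropy(model(x), y)       # normal FL objective
        j = torch.randint(len(patches), (1,)).item()   # pick one chunk
        trig = make_trigger(base + j)
        mem_loss = F.mse_loss(model(trig), patches[j].unsqueeze(0))
        (task_loss + mem_weight * mem_loss).backward() # weight is illustrative
        opt.step()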

The attack requires only a minor modification to the training code that can easily evade detection during client-side verification. Hence, this vulnerability represents a realistic FL supply-chain threat, where a malicious server can distribute modified training code to clients and later recover private data from their updates. Evaluations across classifiers, segmentation models, and large language models demonstrate that thousands of sensitive training samples can be recovered from client models with minimal impact on task performance, and a client's entire dataset can be stolen after multiple FL rounds. For instance, a medical segmentation dataset can be extracted with only a 3% utility drop. These findings expose a critical privacy vulnerability in FL systems, emphasizing the need for stronger integrity and transparency in distributed training pipelines.
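On the server side, extraction then reduces to the indexing process mentioned above: query the returned model with each index trigger and concatenate the emitted chunks. The sketch below continues the hypothetical example above (reusing make_trigger, IN_DIM, and OUT_DIM) and is likewise an assumption-laden illustration rather than the paper's code.

    # Sketch of server-side extraction from a received client model:
    # query every trigger for one sample's index range and stitch the
    # returned chunks back into the flattened training sample.
    import torch

    @torch.no_grad()
    def extract_sample(model, base: int, num_chunks: int) -> torch.Tensor:
        chunks = [model(make_trigger(base + j)).squeeze(0)
                  for j in range(num_chunks)]
        return torch.cat(chunks)[:IN_DIM]   # drop padding, recover sample

Because the triggers are derived deterministically from the indices, the server needs no side channel: the same seeds used during the poisoned training run regenerate the exact queries at extraction time.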
