Jiayun Fu (Huazhong University of Science and Technology), Xiaojing Ma (Huazhong University of Science and Technology), Bin B. Zhu (Microsoft Research Asia), Pingyi Hu (Huazhong University of Science and Technology), Ruixin Zhao (Huazhong University of Science and Technology), Yaru Jia (Huazhong University of Science and Technology), Peng Xu (Huazhong University of Science and Technology), Hai Jin (Huazhong University of Science and Technology), Dongmei Zhang (Microsoft Research)

Split learning is a privacy-preserving distributed learning paradigm that has recently gained momentum. It also faces new security challenges. FSHA is a serious threat to split learning: a malicious server hijacks training to trick clients into training the encoder of an autoencoder instead of the intended classification model. The intermediate results a client sends to the server are then latent codes of its private training samples, which the server can reconstruct with high fidelity using the decoder of the autoencoder. SplitGuard is the only existing effective defense against such hijacking attacks. It is an active method that injects falsely labeled data to induce abnormal behaviors that reveal hijacking attacks, but this injection also adversely affects honest training of intended models.
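To make the threat concrete, below is a minimal PyTorch sketch of the reconstruction step an FSHA-style server performs once training has been hijacked. The module names (ClientEncoder, ServerDecoder), layer choices, and input shapes are illustrative assumptions for exposition, not the architecture used in FSHA or in this paper.

```python
# Illustrative sketch only: all architectures and shapes are assumptions.
import torch
import torch.nn as nn

class ClientEncoder(nn.Module):
    """Client-side layers of the split model; under a hijacking attack these
    are steered to behave like the encoder of the server's autoencoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        # The "smashed data" (latent codes) that the client transmits.
        return self.net(x)

class ServerDecoder(nn.Module):
    """Malicious server's decoder: maps received latent codes back to images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder, decoder = ClientEncoder(), ServerDecoder()
private_batch = torch.rand(8, 3, 32, 32)   # client's private samples
latent = encoder(private_batch)            # what the client actually sends
reconstruction = decoder(latent)           # server's high-fidelity recovery
```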

In this paper, we first show that SplitGuard is vulnerable to an adaptive hijacking attack, which we name SplitSpy. SplitSpy exploits the same property that SplitGuard uses to detect hijacking attacks: a malicious server maintains a shadow model that performs the intended task to detect falsely labeled data and thereby evade SplitGuard. Our experimental evaluation indicates that SplitSpy effectively evades SplitGuard. We then propose a novel passive detection method, named Gradients Scrutinizer, which relies on an intrinsic difference between gradients from an intended model and those from a malicious model: for an intended model, the expected similarity among gradients of same-label samples differs from that among gradients of different-label samples, whereas for a malicious model the two are the same. This intrinsic distinguishability enables Gradients Scrutinizer to detect split-learning hijacking attacks without tampering with honest training of intended models. Our extensive evaluation shows that Gradients Scrutinizer effectively thwarts both known split-learning hijacking attacks and adaptive counterattacks against it.
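The detection signal described above can be illustrated with a short NumPy sketch: group the gradients the client receives at the cut layer by training label, and compare the average pairwise similarity within a label against the average across labels. The use of plain cosine similarity and the fixed threshold are our assumptions for exposition, not the paper's exact test statistic.

```python
# Illustrative sketch of the same-label vs. different-label gradient check;
# the similarity measure and threshold are assumptions, not the paper's method.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def similarity_gap(grads, labels):
    """grads: (n, d) per-sample gradients received at the cut layer;
    labels: (n,) corresponding training labels, known to the client."""
    same, diff = [], []
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            (same if labels[i] == labels[j] else diff).append(cosine(grads[i], grads[j]))
    # Gap between expected same-label and different-label similarity.
    return np.mean(same) - np.mean(diff)

def looks_hijacked(grads, labels, threshold=0.05):
    # An intended (label-dependent) objective yields a clear positive gap;
    # a hijacking objective such as FSHA's is label-agnostic, so the gap
    # stays near zero, flagging the server as malicious.
    return similarity_gap(grads, labels) < threshold
```

Because the check only observes gradients the client already receives during training, it is passive and does not perturb honest training the way SplitGuard's label injection does.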
