Xuanqi Liu (Tsinghua University), Zhuotao Liu (Tsinghua University), Qi Li (Tsinghua University), Ke Xu (Tsinghua University), Mingwei Xu (Tsinghua University)

The escalating focus on data privacy poses significant challenges for collaborative neural network training, where data ownership and model training/deployment responsibilities reside with distinct entities. Our community has made substantial contributions to addressing this challenge, proposing various approaches such as federated learning (FL) and privacy-preserving machine learning based on cryptographic constructs like homomorphic encryption (HE) and secure multiparty computation (MPC). However, FL completely overlooks model privacy, and HE has limited extensibility (confined to only one data provider). While the state-of-the-art MPC frameworks provide reasonable throughput and simultaneously ensure model/data privacy, they rely on a critical non-colluding assumption on the computing servers, and relaxing this assumption is still an open problem.

In this paper, we present Pencil, the first private training framework for collaborative learning that simultaneously offers data privacy, model privacy, and extensibility to multiple data providers, without relying on the non-colluding assumption. Our fundamental design principle is to construct the n-party collaborative training protocol on top of an efficient two-party protocol, while ensuring that switching to a different data provider during model training introduces no extra cost. We introduce several novel cryptographic protocols to realize this design principle and conduct a rigorous security and privacy analysis. Our comprehensive evaluations of Pencil demonstrate that (i) models trained in plaintext and models trained privately using Pencil exhibit nearly identical test accuracies; (ii) the training overhead of Pencil is greatly reduced: Pencil achieves 10∼260× higher throughput and two orders of magnitude less communication than prior art; and (iii) Pencil is resilient against both existing and adaptive (white-box) attacks.
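As an illustration of this design principle only (not Pencil's actual protocols, which are cryptographic constructions detailed in the paper), the following hypothetical Python sketch shows how an n-party training loop can be assembled from repeated two-party steps, with the data provider chosen afresh at each step so that switching providers incurs no additional setup. All names here (two_party_private_step, collaborative_training) are illustrative placeholders.

import random

def two_party_private_step(model_state, provider):
    # Placeholder for Pencil's cryptographic two-party step: the model owner and
    # one data provider jointly compute a model update without revealing the
    # model weights or the raw training data to each other.
    return 0.0  # stand-in for a reconstructed gradient update

def collaborative_training(model_state, data_providers, num_steps):
    for _ in range(num_steps):
        # Any registered data provider can serve the next batch; because every
        # step runs the same two-party protocol, switching providers adds no cost.
        provider = random.choice(data_providers)
        update = two_party_private_step(model_state, provider)
        model_state = model_state + update  # stand-in for applying the update
    return model_state

# Example usage with two hypothetical data providers:
# collaborative_training(0.0, ["provider_A", "provider_B"], num_steps=100)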
