Mohammad Naseri (University College London), Jamie Hayes (DeepMind), Emiliano De Cristofaro (University College London & Alan Turing Institute)

Federated Learning (FL) allows multiple participants to collaboratively train machine learning models by keeping their datasets local and only exchanging model updates. Alas, FL is not necessarily free of privacy and robustness vulnerabilities, e.g., to membership inference, property inference, and backdoor attacks. This paper investigates whether, and to what extent, Differential Privacy (DP) can be used to protect both privacy and robustness in FL. To this end, we present a first-of-its-kind evaluation of Local and Central Differential Privacy (LDP/CDP) techniques in FL, assessing their feasibility and effectiveness.

Our experiments show that both DP variants defend against backdoor attacks, albeit with varying protection-utility trade-offs, and more effectively than other robustness defenses. DP also mitigates white-box membership inference attacks in FL, and ours is the first work to show this empirically. Neither LDP nor CDP, however, defends against property inference. Overall, our work provides a comprehensive, reusable measurement methodology to quantify the trade-offs between robustness/privacy and utility in differentially private FL.
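To make the LDP/CDP distinction concrete, below is a minimal, illustrative sketch of one federated averaging round under each variant. This is not the paper's implementation: the function names and parameters (clip_norm, noise_multiplier, etc.) are assumptions for illustration only. The key difference is where the noise is added: under LDP, each client clips and noises its own update before sending it to an untrusted server; under CDP, a trusted server clips the updates and adds a single noise draw to the aggregate.

```python
# Illustrative sketch (not the authors' code) of one FL round under
# Local DP vs. Central DP, using Gaussian noise and L2 clipping.
import numpy as np

def clip_update(update, clip_norm):
    """Scale an update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def ldp_round(client_updates, clip_norm, noise_multiplier, rng):
    """LDP: each client clips and noises its own update locally,
    so the server never sees a raw update."""
    noisy = []
    for u in client_updates:
        u = clip_update(u, clip_norm)
        u = u + rng.normal(0.0, noise_multiplier * clip_norm, size=u.shape)
        noisy.append(u)
    return np.mean(noisy, axis=0)

def cdp_round(client_updates, clip_norm, noise_multiplier, rng):
    """CDP: a trusted server clips each update, averages, and adds
    one noise draw calibrated to the clipping bound."""
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    agg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return agg + rng.normal(0.0, sigma, size=agg.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    updates = [rng.normal(size=10) for _ in range(100)]
    print("LDP avg:", ldp_round(updates, 1.0, 1.0, rng)[:3])
    print("CDP avg:", cdp_round(updates, 1.0, 1.0, rng)[:3])
```

Because CDP adds noise once to the aggregate rather than per client, it typically achieves a better utility trade-off at the same privacy level, which is consistent with the trade-offs the paper evaluates.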
