Kaiyuan Zhang (Purdue University), Siyuan Cheng (Purdue University), Guangyu Shen (Purdue University), Bruno Ribeiro (Purdue University), Shengwei An (Purdue University), Pin-Yu Chen (IBM Research AI), Xiangyu Zhang (Purdue University), Ninghui Li (Purdue University)

Federated learning collaboratively trains a neural network under the coordination of a global server: each local client receives the current global model weights and sends back parameter updates (gradients) computed on its local private data.
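For concreteness, the following is a minimal sketch of one such round, using a FedAvg-style toy with a linear least-squares model; all names, shapes, and the learning rate are illustrative, not the protocol of any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, X, y):
    # Mean-squared-error gradient computed on one client's private shard.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def federated_round(w, clients, lr=0.1):
    # Each client downloads the global weights, computes a gradient on its
    # private data, and uploads it; the server averages the gradients and
    # applies one update to the global model.
    grads = [local_gradient(w, X, y) for X, y in clients]
    return w - lr * np.mean(grads, axis=0)

# Toy run with two clients holding private (X, y) shards.
w = np.zeros(3)
clients = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(2)]
for _ in range(5):
    w = federated_round(w, clients)
```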
Sending these model updates, however, may leak information about the client's private data.
Gradient inversion attacks exploit this vulnerability to recover private training instances from a client's gradient vectors, and recently proposed attacks have become sophisticated enough that existing defenses struggle to handle them effectively.
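The core of such an attack can be sketched in a few lines of PyTorch: the attacker optimizes a dummy input until its gradient matches the one the client uploaded (a DLG-style toy with a linear model and a known label, shown only as an illustration of the attack family, not any specific published attack).

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 2)
loss_fn = torch.nn.CrossEntropyLoss()

# Gradient the client would upload for one private example.
x_true, y_true = torch.randn(1, 10), torch.tensor([1])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

# Attacker: optimize a dummy input until its gradient matches the observed one.
x_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                      model.parameters(), create_graph=True)
    match = sum(((dg - tg) ** 2).sum()
                for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    opt.step()
# x_dummy now approximates the private input x_true.
```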
In this work, we present a novel defense tailored to large neural network models. Our defense capitalizes on the high dimensionality of the model parameters to perturb gradients within a subspace orthogonal to the original gradient. By leveraging cold posteriors over orthogonal subspaces, our defense implements a refined gradient update mechanism. This enables the selection of an optimal gradient that not only safeguards against gradient inversion attacks but also maintains model utility.
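As a rough numpy illustration of the orthogonal-subspace idea (the paper's actual mechanism additionally samples and selects candidate gradients via cold posteriors; `scale` and `seed` below are hypothetical knobs):

```python
import numpy as np

def orthogonal_perturb(grad, scale=0.1, seed=0):
    # Perturb a flattened gradient only within the subspace orthogonal to it,
    # leaving the component along the true descent direction intact.
    rng = np.random.default_rng(seed)
    g_norm = np.linalg.norm(grad) + 1e-12
    unit_g = grad / g_norm
    noise = rng.normal(size=grad.shape)
    noise -= (noise @ unit_g) * unit_g   # project out the gradient direction
    noise *= g_norm / (np.linalg.norm(noise) + 1e-12)
    return grad + scale * noise

# The perturbation is (numerically) orthogonal to the original gradient.
g = np.random.default_rng(1).normal(size=1000)
assert abs((orthogonal_perturb(g) - g) @ g) < 1e-6
```

In high dimensions, random directions are nearly orthogonal to any fixed gradient, so such perturbations can be made large enough to disrupt inversion while barely changing the loss decrease along the gradient direction.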
We conduct comprehensive experiments across three datasets, evaluating our defense against various state-of-the-art gradient inversion attacks and comparing it with existing defenses.
