Virat Shejwalkar (UMass Amherst), Amir Houmansadr (UMass Amherst)

Federated learning (FL) enables many data owners (e.g., mobile devices) to train a joint ML model (e.g., a next-word prediction classifier) without sharing their private training data.
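To make the setup concrete, here is a minimal sketch of the weighted-averaging step at the heart of standard FL (FedAvg-style aggregation); the function name `fedavg`, the toy updates, and the client weights are illustrative assumptions, not artifacts of the paper.

```python
import numpy as np

def fedavg(client_updates, client_weights):
    """Weighted average of client model updates (FedAvg-style aggregation).

    client_updates: one flattened parameter-update vector per client.
    client_weights: relative weights, e.g., proportional to local dataset sizes.
    """
    weights = np.asarray(client_weights, dtype=float)
    weights /= weights.sum()
    updates = np.stack(client_updates)  # shape: (n_clients, n_params)
    return weights @ updates            # weighted sum over clients

# Toy round: three clients send updates for a 4-parameter model.
updates = [np.random.randn(4) for _ in range(3)]
print(fedavg(updates, client_weights=[100, 50, 50]))
```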

However, FL is known to be susceptible to poisoning attacks by malicious participants (e.g., adversary-owned mobile devices) who aim to degrade the accuracy of the jointly trained model by sending malicious inputs during the federated training process.

In this paper, we present a generic framework for model poisoning attacks on FL. We show that our framework leads to poisoning attacks that outperform state-of-the-art model poisoning attacks by large margins. For instance, our attacks result in $1.5\times$ to $60\times$ higher reductions in the accuracy of FL models compared to previously discovered poisoning attacks.
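As a hedged illustration of what a model poisoning attack can look like in this setting, the sketch below crafts a malicious update as the benign mean pushed along a perturbation direction, with the scale searched so that a (black-box) robustness check still accepts it. The sign-flip direction, the halving search, and the norm-based acceptance test are all assumptions for illustration, not the paper's specific attack framework.

```python
import numpy as np

def craft_malicious_update(benign_updates, accepts, gamma_init=10.0, iters=20):
    """Sketch of an aggregation-aware model poisoning update.

    The malicious update is the benign mean pushed along a perturbation
    direction; a halving search finds the largest scale `gamma` that the
    black-box robustness test `accepts` still tolerates.
    """
    mean = np.mean(benign_updates, axis=0)
    direction = -np.sign(mean)  # simple perturbation: flip the update's signs
    gamma, step, best = gamma_init, gamma_init / 2, 0.0
    for _ in range(iters):
        if accepts(mean + gamma * direction):
            best = max(best, gamma)  # remember the largest accepted scale
            gamma += step            # try to push further
        else:
            gamma -= step            # back off
        step /= 2
    return mean + best * direction

# Toy robustness test: accept updates whose norm is close to the benign norms.
benign = [np.random.randn(4) for _ in range(8)]
max_norm = max(np.linalg.norm(u) for u in benign)
malicious = craft_malicious_update(
    benign, accepts=lambda u: np.linalg.norm(u) <= 1.5 * max_norm)
print(malicious)
```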

Our work demonstrates that existing Byzantine-robust FL algorithms are significantly more susceptible to model poisoning than previously thought. Motivated by this, we design a defense against FL poisoning, called \emph{divide-and-conquer} (DnC). We demonstrate that DnC outperforms all existing Byzantine-robust FL algorithms in defeating model poisoning attacks; specifically, it is $2.5\times$ to $12\times$ more resilient in our experiments with different datasets and models.
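For intuition, here is a minimal sketch of a DnC-style aggregator, assuming a spectral outlier filter: center the client updates, project them onto their top singular vector (the direction along which coordinated poisoned updates tend to stand out), and average after discarding the highest-scoring updates. The function name and parameters are illustrative, and the sketch omits details of the full algorithm such as dimension subsampling.

```python
import numpy as np

def dnc_aggregate(updates, n_malicious, filter_frac=1.0):
    """Sketch of a divide-and-conquer (DnC)-style robust aggregator.

    Centers the client updates, scores each by its projection onto the
    top singular vector of the centered updates, and averages after
    dropping the updates with the largest scores.
    """
    updates = np.stack(updates)                 # shape: (n_clients, n_params)
    centered = updates - updates.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = np.abs(centered @ vt[0])           # outlier score per client
    n_remove = int(filter_frac * n_malicious)
    keep = np.argsort(scores)[: len(updates) - n_remove]
    return updates[keep].mean(axis=0)

# Toy round: 8 benign clients and 2 colluding poisoners.
benign = [np.random.randn(6) for _ in range(8)]
poisoned = [10.0 * np.ones(6) for _ in range(2)]
print(dnc_aggregate(benign + poisoned, n_malicious=2))
```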
