Minghong Fang (University of Louisville), Seyedsina Nabavirazavi (Florida International University), Zhuqing Liu (University of North Texas), Wei Sun (Wichita State University), Sundararaja Iyengar (Florida International University), Haibo Yang (Rochester Institute of Technology)

Federated learning (FL) allows multiple clients to collaboratively train a global machine learning model through a server, without exchanging their private training data. However, the decentralized nature of FL makes it susceptible to poisoning attacks, where malicious clients can manipulate the global model by sending altered local model updates. To counter these attacks, a variety of aggregation rules designed to be resilient to Byzantine failures have been introduced. Nonetheless, these methods can still be vulnerable to sophisticated attacks or depend on unrealistic assumptions about the server. In this paper, we demonstrate that there is no need to design new Byzantine-robust aggregation rules; instead, FL can be secured by enhancing the robustness of well-established aggregation rules. To this end, we present FoundationFL, a novel defense mechanism against poisoning attacks. FoundationFL involves the server generating synthetic updates after receiving local model updates from clients. It then applies existing Byzantine-robust foundational aggregation rules, such as Trimmed-mean or Median, to combine clients' model updates with the synthetic ones. We theoretically establish the convergence performance of FoundationFL under Byzantine settings. Comprehensive experiments across several real-world datasets demonstrate the effectiveness of our FoundationFL method.
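
The following is a minimal sketch of the aggregation step described above: the server appends synthetic updates to the received client updates and then applies a foundational Byzantine-robust rule (coordinate-wise Trimmed-mean or Median). The abstract does not specify how the synthetic updates are generated, so the `synthesize` placeholder below (centered on the coordinate-wise median of client updates) is purely an illustrative assumption, not the paper's actual construction.

```python
import numpy as np

def trimmed_mean(updates, trim_k):
    """Coordinate-wise trimmed mean: drop the trim_k largest and
    trim_k smallest values in each coordinate, average the rest."""
    sorted_updates = np.sort(updates, axis=0)  # sort each coordinate independently
    kept = sorted_updates[trim_k:updates.shape[0] - trim_k]
    return kept.mean(axis=0)

def coordinate_median(updates):
    """Coordinate-wise median of the stacked updates."""
    return np.median(updates, axis=0)

def synthesize(client_updates, num_synthetic):
    """Placeholder synthetic-update generator (assumption for illustration):
    replicate the coordinate-wise median of the client updates."""
    center = np.median(client_updates, axis=0)
    return np.stack([center for _ in range(num_synthetic)])

def foundationfl_style_aggregate(client_updates, num_synthetic, rule="median", trim_k=1):
    """Combine client updates with server-generated synthetic updates,
    then aggregate with an existing robust rule."""
    client_updates = np.asarray(client_updates)
    all_updates = np.concatenate([client_updates, synthesize(client_updates, num_synthetic)], axis=0)
    if rule == "median":
        return coordinate_median(all_updates)
    return trimmed_mean(all_updates, trim_k)

# Toy usage: 4 honest clients, 1 poisoned client, 3 synthetic updates.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(4, 5))
poisoned = np.full((1, 5), 10.0)
updates = np.vstack([honest, poisoned])
print(foundationfl_style_aggregate(updates, num_synthetic=3, rule="median"))
```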
