Minghong Fang (University of Louisville), Seyedsina Nabavirazavi (Florida International University), Zhuqing Liu (University of North Texas), Wei Sun (Wichita State University), Sundararaja Iyengar (Florida International University), Haibo Yang (Rochester Institute of Technology)

Federated learning (FL) allows multiple clients to collaboratively train a global machine learning model through a server without exchanging their private training data. However, the decentralized nature of FL makes it susceptible to poisoning attacks, in which malicious clients manipulate the global model by sending crafted local model updates. To counter such attacks, a variety of Byzantine-robust aggregation rules have been proposed. Nonetheless, these methods remain vulnerable to sophisticated attacks or rely on unrealistic assumptions about the server. In this paper, we demonstrate that there is no need to design new Byzantine-robust aggregation rules; instead, FL can be secured by enhancing the robustness of well-established ones. To this end, we present FoundationFL, a novel defense mechanism against poisoning attacks. In FoundationFL, the server generates synthetic updates after receiving the clients' local model updates and then applies an existing Byzantine-robust foundational aggregation rule, such as Trimmed-mean or Median, to combine the clients' updates with the synthetic ones. We theoretically establish the convergence of FoundationFL under Byzantine settings, and comprehensive experiments on several real-world datasets demonstrate its effectiveness.
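The aggregation step lends itself to a compact illustration. The sketch below shows coordinate-wise Trimmed-mean and Median aggregation, the two standard rules named above, applied to a pool of client and synthetic updates. The abstract does not specify how synthetic updates are generated, so the generation step here (sampling around the coordinate-wise median of the client updates) and the names `foundationfl_round`, `num_synthetic`, and `trim_k` are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def trimmed_mean(updates, trim_k):
    """Coordinate-wise Trimmed-mean: drop the trim_k largest and
    trim_k smallest values in each coordinate, average the rest."""
    sorted_updates = np.sort(updates, axis=0)  # sort each coordinate independently
    kept = sorted_updates[trim_k:len(updates) - trim_k]
    return kept.mean(axis=0)

def coordinate_median(updates):
    """Coordinate-wise Median aggregation."""
    return np.median(updates, axis=0)

def foundationfl_round(client_updates, num_synthetic, trim_k, rng):
    """One hypothetical FoundationFL-style aggregation round:
    pool client updates with server-generated synthetic updates,
    then apply a standard robust rule to the combined set."""
    client_updates = np.asarray(client_updates)
    # Placeholder synthetic-update generator (an assumption, not the
    # paper's method): sample around the coordinate-wise median.
    center = np.median(client_updates, axis=0)
    scale = client_updates.std(axis=0)
    noise = rng.normal(size=(num_synthetic, client_updates.shape[1]))
    synthetic = center + noise * scale
    combined = np.vstack([client_updates, synthetic])  # real + synthetic pool
    return trimmed_mean(combined, trim_k)  # or coordinate_median(combined)

rng = np.random.default_rng(0)
updates = rng.normal(size=(10, 4))   # 10 clients, 4-dimensional toy updates
updates[:3] += 50.0                  # 3 malicious clients send outlier updates
print(foundationfl_round(updates, num_synthetic=5, trim_k=3, rng=rng))
```

In this toy run the outliers from the three malicious clients land in the trimmed tails, so the aggregate stays close to the benign clients' mean; the synthetic updates enlarge the pool that the robust rule operates on, which is the intuition the abstract describes.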
