Minghong Fang (University of Louisville), Seyedsina Nabavirazavi (Florida International University), Zhuqing Liu (University of North Texas), Wei Sun (Wichita State University), Sundararaja Iyengar (Florida International University), Haibo Yang (Rochester Institute of Technology)

Federated learning (FL) allows multiple clients to collaboratively train a global machine learning model through a server, without exchanging their private training data. However, the decentralized nature of FL makes it susceptible to poisoning attacks, in which malicious clients manipulate the global model by sending altered local model updates. To counter these attacks, a variety of aggregation rules designed to be resilient to Byzantine failures have been introduced. Nonetheless, these methods remain vulnerable to sophisticated attacks or rely on unrealistic assumptions about the server. In this paper, we demonstrate that there is no need to design new Byzantine-robust aggregation rules; instead, FL can be secured by enhancing the robustness of well-established aggregation rules. To this end, we present FoundationFL, a novel defense mechanism against poisoning attacks. In FoundationFL, the server generates synthetic updates after receiving local model updates from clients, then applies existing Byzantine-robust foundational aggregation rules, such as Trimmed-mean or Median, to combine the clients' model updates with the synthetic ones. We theoretically establish the convergence performance of FoundationFL under Byzantine settings. Comprehensive experiments across several real-world datasets demonstrate the effectiveness of our FoundationFL method.
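To make the aggregation step concrete, below is a minimal sketch in Python/NumPy of combining client updates with server-generated synthetic updates under a Trimmed-mean (or Median) foundational rule. The abstract does not specify how the synthetic updates are generated, so sampling them around the coordinate-wise median of the client updates, and the `noise_scale` parameter, are illustrative assumptions rather than the paper's actual mechanism.

```python
import numpy as np

def trimmed_mean(updates, trim_k):
    """Coordinate-wise trimmed mean: for each coordinate, drop the
    trim_k largest and trim_k smallest values, then average the rest."""
    sorted_updates = np.sort(updates, axis=0)
    return sorted_updates[trim_k:len(updates) - trim_k].mean(axis=0)

def foundationfl_aggregate(client_updates, num_synthetic, trim_k,
                           noise_scale=0.01, rng=None):
    """Sketch of a FoundationFL-style aggregation round.

    client_updates: list of 1-D NumPy arrays (flattened local updates).
    num_synthetic:  number of synthetic updates the server injects.
    trim_k:         trimming parameter of the foundational rule.

    NOTE: generating synthetic updates by adding Gaussian noise around
    the coordinate-wise median is an assumption for illustration only.
    """
    rng = np.random.default_rng() if rng is None else rng
    clients = np.stack(client_updates)              # shape (n, d)
    center = np.median(clients, axis=0)             # robust anchor point
    synthetic = center + noise_scale * rng.standard_normal(
        (num_synthetic, clients.shape[1]))
    combined = np.vstack([clients, synthetic])      # clients + synthetic
    # Apply an existing Byzantine-robust foundational rule; the
    # coordinate-wise Median (np.median(combined, axis=0)) works too.
    return trimmed_mean(combined, trim_k)

# Usage: 10 clients, 5-dimensional updates, 4 synthetic updates, trim 2.
updates = [np.random.randn(5) for _ in range(10)]
global_update = foundationfl_aggregate(updates, num_synthetic=4, trim_k=2)
```

The design intuition suggested by the abstract is that the synthetic updates shift the composition of the population seen by the foundational rule, so that standard Trimmed-mean or Median tolerates a larger fraction of malicious client updates without requiring a newly designed aggregation rule.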
