Minghong Fang (University of Louisville), Seyedsina Nabavirazavi (Florida International University), Zhuqing Liu (University of North Texas), Wei Sun (Wichita State University), Sundararaja Iyengar (Florida International University), Haibo Yang (Rochester Institute of Technology)

Federated learning (FL) allows multiple clients to collaboratively train a global machine learning model through a server, without exchanging their private training data. However, the decentralized nature of FL makes it susceptible to poisoning attacks, in which malicious clients manipulate the global model by sending altered local model updates. To counter these attacks, a variety of aggregation rules designed to be resilient to Byzantine failures have been introduced. Nonetheless, these methods can still be vulnerable to sophisticated attacks or depend on unrealistic assumptions about the server. In this paper, we demonstrate that there is no need to design new Byzantine-robust aggregation rules; instead, FL can be secured by enhancing the robustness of well-established aggregation rules. To this end, we present FoundationFL, a novel defense mechanism against poisoning attacks. In FoundationFL, the server generates synthetic updates after receiving the clients' local model updates, and then applies existing Byzantine-robust foundational aggregation rules, such as Trimmed-mean or Median, to combine the clients' model updates with the synthetic ones. We theoretically establish the convergence of FoundationFL under Byzantine settings. Comprehensive experiments across several real-world datasets validate the effectiveness of our FoundationFL method.
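Since the abstract only outlines the mechanism, the following is a minimal illustrative sketch, not the paper's implementation, of the two pieces it names: coordinate-wise Trimmed-mean aggregation, and mixing the clients' updates with server-generated synthetic ones. The function name `foundationfl_aggregate` and the synthetic-update generation shown here (coordinate-wise median of the received updates plus small Gaussian noise) are assumptions made for illustration; the paper's actual generation strategy may differ.

```python
import numpy as np

def trimmed_mean(updates, trim_k):
    """Coordinate-wise trimmed mean: in each coordinate, drop the trim_k
    smallest and trim_k largest values, then average the rest."""
    sorted_updates = np.sort(updates, axis=0)
    return sorted_updates[trim_k:updates.shape[0] - trim_k].mean(axis=0)

def foundationfl_aggregate(client_updates, n_synthetic, trim_k, seed=0):
    """Illustrative sketch (assumed, not the paper's code): append synthetic
    updates to the clients' updates, then aggregate the combined pool with a
    foundational Byzantine-robust rule (Trimmed-mean here)."""
    rng = np.random.default_rng(seed)
    client_updates = np.asarray(client_updates, dtype=float)
    # Assumed placeholder generation: center synthetic updates on the
    # coordinate-wise median of the received updates, plus small noise.
    center = np.median(client_updates, axis=0)
    synthetic = center + 0.01 * rng.standard_normal(
        (n_synthetic, client_updates.shape[1]))
    combined = np.vstack([client_updates, synthetic])
    return trimmed_mean(combined, trim_k)

# Toy usage: 8 client updates in R^5 (one crudely poisoned), 2 synthetic
# updates, trimming 2 values per side in each coordinate.
clients = np.random.default_rng(1).standard_normal((8, 5))
clients[0] += 100.0  # poisoned update
global_update = foundationfl_aggregate(clients, n_synthetic=2, trim_k=2)
```

The intuition this sketch captures is that Trimmed-mean and Median already tolerate a bounded fraction of outliers per coordinate; padding the pool with well-behaved synthetic updates lowers the fraction of poisoned values that can survive the trimming step.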
