Tianyue Chu, Devriş İşler (IMDEA Networks Institute & Universidad Carlos III de Madrid), Nikolaos Laoutaris (IMDEA Networks Institute)

Federated Learning (FL) has evolved into a pivotal paradigm for collaborative machine learning, enabling a centralised server to compute a global model by aggregating the local models trained by clients. However, the distributed nature of FL renders it susceptible to poisoning attacks that exploit its linear aggregation rule, FEDAVG. To address this vulnerability, FEDQV has recently been introduced as a superior alternative to FEDAVG, specifically designed to mitigate poisoning attacks by taxing deviating clients more than linearly. Nevertheless, FEDQV remains exposed to privacy attacks that aim to infer private information from clients’ local models. To counteract such privacy threats, a well-known approach is to use a Secure Aggregation (SA) protocol to ensure that the server is unable to inspect individual trained models as it aggregates them. In this work, we show how to implement SA on top of FEDQV in order to address both poisoning and privacy attacks. We mount several privacy attacks against FEDQV and demonstrate the effectiveness of SA in countering them.
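To illustrate why a linear aggregation rule is vulnerable to poisoning, the sketch below shows a minimal weighted average in the style of FEDAVG. This is not the paper's implementation; `fedavg` and the toy model vectors are hypothetical names chosen for the example.

```python
import numpy as np

def fedavg(client_models, client_weights):
    """Linear (weighted-average) aggregation of client model vectors,
    in the style of FedAvg. Hypothetical illustration, not the paper's code."""
    weights = np.asarray(client_weights, dtype=float)
    weights = weights / weights.sum()          # normalise by data size
    stacked = np.stack([np.asarray(m, dtype=float) for m in client_models])
    return weights @ stacked                   # linear combination of models

# Two honest clients and one malicious client holding equal data shares.
# Because the rule is linear, a single outlier update shifts the global
# model arbitrarily far -- the vulnerability FEDQV aims to tax away.
models = [[1.0, 2.0], [1.0, 2.0], [100.0, -50.0]]
sizes = [10, 10, 10]
print(fedavg(models, sizes))  # → [ 34.         -15.33333333]
```

A rule like FEDQV penalises such deviating updates more than linearly, so the outlier's influence on the aggregate is bounded rather than proportional to its magnitude.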
