Shanghao Shi (Virginia Tech), Ning Wang (University of South Florida), Yang Xiao (University of Kentucky), Chaoyu Zhang (Virginia Tech), Yi Shi (Virginia Tech), Y. Thomas Hou (Virginia Polytechnic Institute and State University), Wenjing Lou (Virginia Polytechnic Institute and State University)

Federated learning is known for its capability to safeguard the participants' data privacy. However, recently proposed model inversion attacks (MIAs) have shown that a malicious parameter server can reconstruct individual users' local data samples from their model updates. State-of-the-art attacks either rely on computation-intensive iterative optimization to reconstruct each input batch, which makes them difficult to scale, or require the malicious parameter server to insert extra modules in front of the global model architecture, which makes the attacks conspicuous and easy to detect.

To overcome these limitations, we propose Scale-MIA, a novel MIA capable of efficiently and accurately reconstructing local training samples from the aggregated model updates, even when the system is protected by a robust secure aggregation (SA) protocol. Scale-MIA exploits the inner architecture of the target model and identifies the latent space as the critical layer for breaching privacy, decomposing the complex reconstruction task into a two-step process. In the first step, the latent space representations (LSRs) are recovered from the aggregated model updates through a closed-form inversion mechanism that leverages specially crafted linear layers. In the second step, the LSRs are fed into a fine-tuned generative decoder to reconstruct the entire input batch, as sketched below.
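
The following minimal sketch (PyTorch) illustrates the two-step idea in simplified form; it is not the authors' implementation. It assumes a crafted linear layer whose units act as cumulative threshold "bins" over the LSRs, so that differencing adjacent rows of the aggregated weight and bias gradients isolates individual LSRs, which are then passed to a placeholder generative decoder. The names recover_lsrs, Decoder, n_bins, and latent_dim are hypothetical, and the exact layer crafting and decoder fine-tuning used by Scale-MIA differ in detail.

# Simplified two-step reconstruction sketch (assumptions noted above);
# NOT the authors' exact mechanism.
import torch
import torch.nn as nn

def recover_lsrs(grad_W: torch.Tensor, grad_b: torch.Tensor, eps: float = 1e-8):
    """Step 1: closed-form recovery of latent space representations (LSRs)
    from the aggregated gradients of a crafted linear layer.

    grad_W: (n_bins, latent_dim) aggregated weight gradient
    grad_b: (n_bins,)            aggregated bias gradient

    With cumulative (threshold-style) activations, differencing adjacent rows
    isolates the samples that fall into a single bin:
        lsr_i ≈ (grad_W[i] - grad_W[i+1]) / (grad_b[i] - grad_b[i+1])
    """
    lsrs = []
    for i in range(grad_W.shape[0] - 1):
        db = grad_b[i] - grad_b[i + 1]
        if db.abs() > eps:  # bin i captured at least one sample
            lsrs.append((grad_W[i] - grad_W[i + 1]) / db)
    return torch.stack(lsrs) if lsrs else torch.empty(0, grad_W.shape[1])

class Decoder(nn.Module):
    """Step 2: a generative decoder mapping LSRs back to inputs
    (placeholder architecture standing in for the fine-tuned decoder)."""
    def __init__(self, latent_dim: int = 128, out_dim: int = 3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, out_dim), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

if __name__ == "__main__":
    # Random stand-ins for the aggregated gradients: shapes only, no real data.
    n_bins, latent_dim = 64, 128
    grad_W, grad_b = torch.randn(n_bins, latent_dim), torch.randn(n_bins)
    lsrs = recover_lsrs(grad_W, grad_b)      # step 1: closed-form inversion
    images = Decoder(latent_dim)(lsrs)       # step 2: generative decoding
    print(images.shape)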

We implemented Scale-MIA on commonly used machine learning models and conducted comprehensive experiments across various settings. The results demonstrate that Scale-MIA achieves excellent performance on different datasets, with higher reconstruction rates, accuracy, and attack efficiency at larger scales than state-of-the-art MIAs. Our code is available at https://github.com/unknown123489/Scale-MIA.
