Chunyi Zhou (Nanjing University of Science and Technology), Yansong Gao (Nanjing University of Science and Technology), Anmin Fu (Nanjing University of Science and Technology), Kai Chen (Chinese Academy of Sciences), Zhiyang Dai (Nanjing University of Science and Technology), Zhi Zhang (CSIRO's Data61), Minhui Xue (CSIRO's Data61), Yuqing Zhang (University of Chinese Academy of Sciences)

Federated learning (FL) trains a global model across a number of decentralized users, each with a local dataset. Compared to traditional centralized learning, FL does not require direct access to local datasets and thus aims to mitigate data privacy concerns. However, data privacy leakage in FL still exists due to inference attacks, including membership inference, property inference, and data inversion.
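To make the setting concrete, below is a minimal FedAvg-style sketch, not the paper's code: each client computes an update on its private data, and the server only ever sees model parameters. The toy least-squares model, client data, and learning rate are illustrative assumptions.

```python
# Minimal FedAvg-style sketch (illustrative only): clients train locally and
# upload model weights; raw local datasets never leave the clients.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, local_data, lr=0.1):
    """One client step: gradient descent on a toy least-squares loss."""
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

# Three clients, each holding private (X, y) data the server never sees.
clients = [(rng.normal(size=(32, 4)), rng.normal(size=32)) for _ in range(3)]
w = np.zeros(4)

for _ in range(5):                                # federated rounds
    updates = [local_update(w, data) for data in clients]
    w = np.mean(updates, axis=0)                  # server-side aggregation
```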

In this work, we propose a new type of privacy inference attack, coined Preference Profiling Attack (PPA), that accurately profiles the private preferences of a local user, e.g., the most liked (disliked) items from the client's online shopping or the most common expressions in the user's selfies. In general, PPA can profile top-$k$ (i.e., $k = 1, 2, 3$, with $k = 1$ in particular) preferences contingent on the local client (user)'s characteristics. Our key insight is that the gradient variation of a local user's model has a distinguishable sensitivity to the sample proportion of a given class, especially the majority (minority) class. By observing a user model's gradient sensitivity to a class, PPA can profile the sample proportion of that class in the user's local dataset, thereby exposing the user's preference for the class. The inherent statistical heterogeneity of FL further facilitates PPA. We have extensively evaluated PPA's effectiveness using four datasets (MNIST, CIFAR10, RAF-DB, and Products-10K). Our results show that PPA achieves a top-$1$ attack accuracy of 90% and 98% on MNIST and CIFAR10, respectively. More importantly, in real-world commercial scenarios of shopping (i.e., Products-10K) and social networking (i.e., RAF-DB), PPA attains a top-$1$ attack accuracy of 78% in the former case to infer the most ordered items (i.e., as a commercial competitor), and 88% in the latter case to infer a victim user's most frequent facial expression, e.g., disgusted. The top-$3$ and top-$2$ attack accuracies reach up to 88% and 100% for Products-10K and RAF-DB, respectively. We also show that PPA is insensitive to the number of FL's local users (up to 100 tested) and the number of local training epochs (up to 20 tested). Although existing countermeasures such as dropout and differential privacy protection can lower PPA's accuracy to some extent, they unavoidably incur notable deterioration of the global model. The source code is available at https://github.com/PPAattack.
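The following sketch illustrates the intuition behind that key insight; it is not the authors' implementation. An attacker holding a client's uploaded model probes it with one auxiliary batch per class and compares per-class gradient norms, since classes over- or under-represented in the client's local data induce distinguishably different gradient sensitivity. The model, probe data, and ranking rule below are placeholder assumptions; a real attack would learn the mapping from sensitivity features to top-$k$ classes rather than rank by raw norm.

```python
# Hedged sketch of gradient-sensitivity probing (illustrative, not PPA itself):
# compare the gradient norm a received client model produces on a probe batch
# of each class; atypical norms hint at the class's share in the local data.
import torch
import torch.nn as nn

torch.manual_seed(0)
num_classes, dim = 10, 32
client_model = nn.Linear(dim, num_classes)  # stands in for an uploaded model
loss_fn = nn.CrossEntropyLoss()

def class_gradient_norm(model, c, n_probe=64):
    """Gradient norm of the loss on an auxiliary probe batch of class c."""
    x = torch.randn(n_probe, dim)            # placeholder probe samples
    y = torch.full((n_probe,), c)            # all labels set to class c
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return sum(p.grad.norm().item() for p in model.parameters())

norms = [class_gradient_norm(client_model, c) for c in range(num_classes)]
# Whether the preferred class shows the largest or smallest sensitivity
# depends on the training state; here we simply rank norms as an illustration.
top_k = sorted(range(num_classes), key=lambda c: norms[c])[:3]
print("candidate preferred classes:", top_k)
```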

View More Papers

Machine Unlearning of Features and Labels

Alexander Warnecke (TU Braunschweig), Lukas Pirch (TU Braunschweig), Christian Wressnegger (Karlsruhe Institute of Technology (KIT)), Konrad Rieck (TU Braunschweig)

Applying Accessibility Metrics to Measure the Threat Landscape for...

John Breton, AbdelRahman Abdou (Carleton University)

Cryptographic Oracle-based Conditional Payments

Varun Madathil (North Carolina State University), Sri Aravinda Krishnan Thyagarajan (NTT Research), Dimitrios Vasilopoulos (IMDEA Software Institute), Lloyd Fournier, Giulio Malavolta (Max Planck Institute for Security and Privacy), Pedro Moreno-Sanchez (IMDEA Software Institute)

Faster Secure Comparisons with Offline Phase for Efficient Private...

Florian Kerschbaum (University of Waterloo), Erik-Oliver Blass (Airbus), Rasoul Akhavan Mahdavi (University of Waterloo)