Shengwei An (Purdue University), Guanhong Tao (Purdue University), Qiuling Xu (Purdue University), Yingqi Liu (Purdue University), Guangyu Shen (Purdue University), Yuan Yao (Nanjing University), Jingwei Xu (Nanjing University), Xiangyu Zhang (Purdue University)

Model inversion reverse-engineers input samples from a given model and hence poses serious threats to information confidentiality. We propose a novel inversion technique based on StyleGAN, whose generator has a special architecture that forces the decomposition of an input into styles of various granularities, so that the model can learn them separately during training. During sample generation, the generator transforms a latent value into parameters controlling these styles to compose a sample. In our inversion, given a target label of some subject model to invert (e.g., a private face-based identity recognition model), our technique leverages a StyleGAN trained on public data from the same domain (e.g., a public human face dataset) and uses gradient descent or genetic search, together with distribution-based clipping, to find a parameterization of the styles such that the generated sample is both correctly classified to the target label by the subject model and recognizable by humans. The results show that our inverted samples have high fidelity, substantially better than those produced by existing state-of-the-art techniques.
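The following is a minimal sketch of the gradient-descent variant described above, written in PyTorch. The names `generator` (a StyleGAN-style mapping from style vectors to images), `subject_model` (the classifier being inverted), and the style-space statistics `w_mean` / `w_std` are assumptions standing in for the authors' actual components; the loop only illustrates the idea of searching the style space under distribution-based clipping, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def invert_label(generator, subject_model, target_label,
                 w_mean, w_std, steps=1000, lr=0.05, clip_sigma=2.0):
    """Hypothetical sketch: search the style space for a parameterization whose
    generated image the subject model classifies as `target_label`, while
    clipping styles to the distribution observed on public data."""
    w = w_mean.clone().requires_grad_(True)          # start from the average style
    optimizer = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        image = generator(w)                          # styles -> synthesized sample
        logits = subject_model(image)                 # subject model's prediction
        loss = F.cross_entropy(logits, torch.tensor([target_label]))
        loss.backward()
        optimizer.step()

        # Distribution-based clipping (as named in the abstract): keep each
        # style coordinate within clip_sigma standard deviations of the
        # public-data style distribution so the sample stays realistic.
        with torch.no_grad():
            w.clamp_(w_mean - clip_sigma * w_std, w_mean + clip_sigma * w_std)

    return generator(w).detach()
```

The genetic-search variant mentioned in the abstract would replace the gradient step with mutation and selection over candidate style vectors, applying the same clipping to each candidate.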
