Haotian Zhu (Nanjing University of Science and Technology), Shuchao Pang (Nanjing University of Science and Technology), Zhigang Lu (Western Sydney University), Yongbin Zhou (Nanjing University of Science and Technology), Minhui Xue (CSIRO's Data61)

Fine-tuning techniques for text-to-image diffusion models allow anyone to generate large numbers of customized photos from only a few identity images. While convenient, misuse of this technology can violate personal portrait rights and privacy, and the resulting false or harmful content can inflict further harm on individuals. Several methods have been proposed to protect faces from such customization by adding protective noise to user images that disrupts the fine-tuned models.

Unfortunately, simple pre-processing techniques such as JPEG compression, a routine operation on modern social networks, can easily erase the protective effect of existing methods. To counter JPEG compression and other potential pre-processing, we propose GAP-Diff, a framework for Generating data with Adversarial Perturbations for text-to-image Diffusion models via unsupervised learning-based optimization, comprising three functional modules. Specifically, our framework learns representations robust to JPEG compression by backpropagating gradient information through a pre-processing simulation module, while also learning adversarial characteristics that disrupt fine-tuned text-to-image diffusion models. Furthermore, by designing adversarial losses against these fine-tuning methods and JPEG compression, we obtain a mapping from clean images to protected images that produces stronger protective noise within milliseconds. Experiments on facial benchmarks show that, compared with state-of-the-art protective methods, GAP-Diff significantly enhances the resistance of protective noise to JPEG compression, thereby better safeguarding user privacy and copyright in the digital world.
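The core idea described above, training a generator to emit bounded protective noise while backpropagating through a differentiable stand-in for JPEG compression, can be sketched as follows. This is a minimal, hypothetical illustration in PyTorch: the network architecture, the noise budget `eps`, the additive-noise compression proxy, and the surrogate loss are all assumptions for exposition, not the paper's actual GAP-Diff modules or losses (the real adversarial loss would attack a fine-tuned diffusion model).

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Hypothetical tiny generator: maps a clean image to a protected image
    whose perturbation is bounded by eps (Tanh output scaled by eps)."""
    def __init__(self, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return (x + self.eps * self.net(x)).clamp(0.0, 1.0)

def simulated_jpeg(x, noise_scale=0.02):
    """Crude differentiable stand-in for JPEG pre-processing: quantization
    error is modeled as additive noise, so gradients flow to the generator."""
    return (x + noise_scale * torch.randn_like(x)).clamp(0.0, 1.0)

gen = PerturbationGenerator()
clean = torch.rand(2, 3, 32, 32)            # batch of clean face images
protected = gen(clean)                       # add bounded protective noise
compressed = simulated_jpeg(protected)       # simulate platform pre-processing

# Placeholder surrogate loss: reward perturbations that survive compression.
loss = -(compressed - clean).pow(2).mean()
loss.backward()                              # gradients reach the generator
```

Once trained, such a generator protects an image in a single forward pass, which is what makes millisecond-scale protection possible, in contrast to per-image iterative optimization.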
