Yan Pang (University of Virginia), Tianhao Wang (University of Virginia)

With the rapid advancement of diffusion-based image-generative models, the quality of generated images has become increasingly photorealistic. Moreover, with the release of high-quality pre-trained image-generative models, a growing number of users download these pre-trained models and fine-tune them on downstream datasets for various image-generation tasks. However, employing such powerful pre-trained models in downstream tasks introduces significant privacy-leakage risks. In this paper, we propose the first score-based membership inference attack framework tailored to recent diffusion models, operating under the more stringent black-box access setting. Covering four distinct attack scenarios and three types of attacks, the framework can target any popular conditional generator model and achieves strong attack performance, evidenced by an AUC of 0.95.
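
The abstract does not detail the attack itself, but the general idea behind a score-based membership inference attack can be illustrated: compute a per-image score from the target model's (black-box) outputs, then rank or threshold those scores to separate training members from non-members, and report the resulting AUC. The sketch below is a minimal, hypothetical illustration of that score-then-evaluate loop, not the authors' framework; `score_fn`, the candidate image sets, and the sign convention (lower score taken to indicate membership) are all assumptions introduced here.

```python
# Minimal illustrative sketch of a generic score-based membership inference
# evaluation, NOT the paper's attack. `score_fn` is a hypothetical black-box
# scoring oracle (e.g., some reconstruction/denoising error the attacker can
# compute from the target generator's outputs).
import numpy as np
from sklearn.metrics import roc_auc_score


def membership_scores(score_fn, images):
    """Query the scoring oracle once per candidate image.

    Lower scores are assumed to indicate likely training members.
    """
    return np.array([score_fn(x) for x in images])


def evaluate_attack(score_fn, member_images, nonmember_images):
    """Measure how well the score separates members from non-members (AUC)."""
    scores = np.concatenate([
        membership_scores(score_fn, member_images),
        membership_scores(score_fn, nonmember_images),
    ])
    labels = np.concatenate([
        np.ones(len(member_images)),      # 1 = training member
        np.zeros(len(nonmember_images)),  # 0 = non-member
    ])
    # Negate scores so that a lower error maps to higher membership confidence.
    return roc_auc_score(labels, -scores)
```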
