Bristena Oprisanu (UCL), Georgi Ganev (UCL & Hazy), Emiliano De Cristofaro (UCL)

The availability of genomic data is essential to progress in biomedical research, personalized medicine, and beyond. However, its extreme sensitivity makes it problematic, if not outright impossible, to publish or share. As a result, several initiatives have been launched to experiment with synthetic genomic data, e.g., using generative models to learn the underlying distribution of the real data and generate artificial datasets that preserve its salient characteristics without exposing it. This paper provides the first evaluation of both the utility and the privacy protection of six state-of-the-art models for generating synthetic genomic data. We assess the performance of the synthetic data on several common tasks, such as allele population statistics and linkage disequilibrium. We then measure privacy through the lens of membership inference attacks, i.e., inferring whether a record was part of the training data.
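To make the utility tasks concrete, here is a minimal sketch, in Python with NumPy, of how allele frequencies and linkage disequilibrium (approximated as the squared genotype correlation, r^2) could be compared between a real and a synthetic genotype matrix. The 0/1/2 genotype encoding, the matrix shapes, and the randomly generated toy cohorts are illustrative assumptions, not the paper's datasets or evaluation code.

import numpy as np

def allele_frequencies(genotypes):
    # Alternate-allele frequency per SNP for a (samples x SNPs) matrix of 0/1/2 genotype counts.
    return genotypes.mean(axis=0) / 2.0

def ld_r2(genotypes):
    # Pairwise linkage disequilibrium approximated as the squared Pearson correlation
    # of genotype counts; np.corrcoef expects variables in rows, hence the transpose.
    r = np.corrcoef(genotypes.T)
    return r ** 2

# Toy comparison between a hypothetical real cohort and a synthetic one of the same shape.
rng = np.random.default_rng(0)
real = rng.integers(0, 3, size=(200, 50))
synthetic = rng.integers(0, 3, size=(200, 50))

freq_gap = np.abs(allele_frequencies(real) - allele_frequencies(synthetic)).mean()
ld_gap = np.abs(ld_r2(real) - ld_r2(synthetic)).mean()
print(f"mean allele-frequency gap: {freq_gap:.3f}")
print(f"mean LD r^2 gap: {ld_gap:.3f}")

Lower gaps indicate synthetic data whose marginal and pairwise statistics track the real cohort more closely; in practice these summaries would be computed on the actual real and synthetic genotype matrices rather than random toy data.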

Our experiments show that no single approach to generating synthetic genomic data yields both high utility and strong privacy across the board. Also, the size and nature of the training dataset matter. Moreover, while some combinations of datasets and models produce synthetic data with distributions close to the real data, there are often target data points that remain vulnerable to membership inference. Looking forward, our techniques can be used by practitioners to assess the risks of deploying synthetic genomic data in the wild and can serve as a benchmark for future work.
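As an illustration of the kind of per-record vulnerability membership inference exploits, the following is a minimal sketch of a generic distance-based attack: a candidate record that lies unusually close to the synthetic release is flagged as a likely training member. The Hamming-distance scoring, the arbitrary decision threshold, and the randomly generated data are assumptions made for the demo, not the attacks evaluated in the paper.

import numpy as np

def min_hamming_distance(target, synthetic):
    # Smallest Hamming distance from a target genotype vector to any synthetic record.
    return int(np.min((synthetic != target).sum(axis=1)))

def membership_score(target, synthetic):
    # Higher score means the target sits closer to the synthetic data, which a
    # distance-based attacker reads as evidence that it was in the training set.
    return -min_hamming_distance(target, synthetic)

# Toy illustration: score candidate targets against a synthetic release.
rng = np.random.default_rng(1)
synthetic = rng.integers(0, 3, size=(500, 50))
candidates = rng.integers(0, 3, size=(10, 50))

threshold = -5  # arbitrary cut-off for the demo; a real attack would calibrate it
for i, target in enumerate(candidates):
    score = membership_score(target, synthetic)
    print(f"candidate {i}: score={score}, predicted member={score >= threshold}")

A record that is copied or near-copied into the synthetic release scores far higher than the population baseline, which is exactly the signal that makes some target data points vulnerable even when aggregate distributions look close to the real data.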
