Yang Zhang (CISPA Helmholtz Center for Information Security), Mathias Humbert (armasuisse Science and Technology), Bartlomiej Surma (CISPA Helmholtz Center for Information Security), Praveen Manoharan (CISPA Helmholtz Center for Information Security), Jilles Vreeken (CISPA Helmholtz Center for Information Security), Michael Backes (CISPA Helmholtz Center for Information Security)

Social graphs derived from online social interactions contain a wealth of information that is nowadays extensively used by both industry and academia. However, because social graphs contain sensitive information, they need to be properly anonymized before release. Most existing graph anonymization mechanisms rely on perturbing the original graph's edge set. In this paper, we identify a fundamental weakness of these mechanisms: they neglect the strong structural proximity between friends in social graphs and therefore add implausible fake edges during anonymization.
To exploit this weakness, we first propose a metric that quantifies an edge's plausibility by relying on graph embedding. Extensive experiments on three real-life social network datasets demonstrate that our plausibility metric very effectively differentiates fake edges from original edges, with AUC values above 0.95 in most cases. We then rely on a Gaussian mixture model to automatically derive a threshold on the edge plausibility values for deciding whether an edge is fake, which enables us to recover the original graph from the anonymized graph to a large extent. We further demonstrate that our graph recovery attack jeopardizes the privacy guarantees provided by the considered graph anonymization mechanisms.
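The core of the recovery attack can be illustrated with a short sketch: embed the graph, score each edge by the similarity of its endpoints' embeddings, and fit a two-component Gaussian mixture model to the scores to separate likely-fake from likely-original edges. The library choices (node2vec, scikit-learn) and all hyperparameters below are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of embedding-based edge plausibility and graph recovery.
import numpy as np
import networkx as nx
from node2vec import Node2Vec
from sklearn.mixture import GaussianMixture

def edge_plausibility_scores(graph: nx.Graph) -> dict:
    """Score every edge by the cosine similarity of its endpoint embeddings."""
    embedding = Node2Vec(graph, dimensions=64, walk_length=30,
                         num_walks=10).fit(window=5, min_count=1)
    scores = {}
    for u, v in graph.edges():
        eu, ev = embedding.wv[str(u)], embedding.wv[str(v)]
        scores[(u, v)] = float(np.dot(eu, ev) /
                               (np.linalg.norm(eu) * np.linalg.norm(ev)))
    return scores

def recover_graph(anonymized: nx.Graph) -> nx.Graph:
    """Remove edges that fall into the low-plausibility GMM component."""
    scores = edge_plausibility_scores(anonymized)
    values = np.array(list(scores.values())).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(values)
    labels = gmm.predict(values)
    fake_label = int(np.argmin(gmm.means_))  # lower-mean component = implausible edges
    recovered = anonymized.copy()
    recovered.remove_edges_from(
        edge for edge, label in zip(scores, labels) if label == fake_label)
    return recovered
```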
To mitigate this vulnerability, we propose a method that generates fake yet plausible edges given the graph structure and incorporate it into the existing anonymization mechanisms. Our evaluation demonstrates that the enhanced mechanisms decrease the chances of graph recovery, reduce the success rate of graph de-anonymization (by up to 30%), and provide even better utility than the existing anonymization mechanisms.
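As a rough illustration of the countermeasure's idea, the sketch below adds fake edges only between node pairs that are already structurally close (approximated here by common-neighbor count) rather than between arbitrary pairs; this ranking criterion is an assumption for illustration, not the paper's exact algorithm.

```python
# Hypothetical sketch: prefer structurally close non-edges when adding fake edges.
import heapq
import networkx as nx

def plausible_fake_edges(graph: nx.Graph, budget: int) -> list:
    """Rank non-edges by structural proximity and return the top `budget`."""
    candidates = []
    for u, v in nx.non_edges(graph):  # O(n^2) enumeration; fine for a sketch
        proximity = len(list(nx.common_neighbors(graph, u, v)))
        if proximity > 0:
            candidates.append((proximity, (u, v)))
    best = heapq.nlargest(budget, candidates, key=lambda c: c[0])
    return [edge for _, edge in best]

def add_plausible_fake_edges(graph: nx.Graph, budget: int) -> nx.Graph:
    """Perturb the graph with fake edges that blend in structurally."""
    perturbed = graph.copy()
    perturbed.add_edges_from(plausible_fake_edges(graph, budget))
    return perturbed
```

Because the added edges connect nodes that already share neighbors, their endpoint embeddings tend to be similar, which makes them harder to single out with the plausibility attack sketched above.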
