Yu Zheng (University of California, Irvine), Chenang Li (University of California, Irvine), Zhou Li (University of California, Irvine), Qingsong Wang (University of California, San Diego)

Differential privacy (DP) has been integrated into graph neural networks (GNNs) to protect sensitive structural information, e.g., edges, nodes, and associated features, across various applications. A prominent approach is to perturb the message-passing process, which forms the core of most GNN architectures. However, existing methods typically incur a privacy cost that grows linearly with the number of layers (e.g., GAP, published at USENIX Security '23), ultimately requiring excessive noise to maintain a reasonable privacy level. This limitation becomes particularly problematic when multi-layer GNNs, which have shown better performance than one-layer GNNs, are used to process graph data with sensitive information.

In this paper, we theoretically establish that the privacy budget converges with respect to the number of layers by applying privacy amplification techniques to the message-passing process, exploiting the contractive properties inherent in standard GNN operations. Motivated by this analysis, we propose a simple yet effective Contractive Graph Layer (CGL) that ensures the contractiveness required for our theoretical guarantees while preserving model utility. Our framework, CARIBOU, supports both training and inference and is equipped with a contractive aggregation module, a privacy allocation module, and a privacy auditing module. Experimental evaluations demonstrate that CARIBOU significantly improves the privacy-utility trade-off and achieves superior performance in privacy auditing tasks.
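To illustrate the kind of contractiveness the abstract refers to, the sketch below shows a generic non-expansive graph layer: symmetrically normalized aggregation (whose propagation matrix has spectral norm at most 1), a weight matrix clipped to unit spectral norm, and a 1-Lipschitz activation. This is only a minimal illustration of the general principle; the function names and the specific construction are our own assumptions, not the paper's actual CGL design.

```python
import numpy as np

def clip_spectral_norm(W, max_norm=1.0):
    # Scale W so its largest singular value is at most max_norm,
    # making x -> x @ W non-expansive in the Euclidean norm.
    # (Illustrative helper, not from the paper.)
    s = np.linalg.norm(W, ord=2)
    return W if s <= max_norm else W * (max_norm / s)

def contractive_layer(X, A, W):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}: this
    # propagation matrix has spectral norm <= 1, so aggregation
    # cannot expand distances between feature matrices.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    P = (d_inv_sqrt[:, None] * A_hat) * d_inv_sqrt[None, :]
    H = P @ X                         # non-expansive aggregation
    H = H @ clip_spectral_norm(W)     # spectral-norm-clipped linear map
    return np.tanh(H)                 # 1-Lipschitz activation
```

Because each stage is 1-Lipschitz, the composed layer satisfies ||f(X1) - f(X2)|| <= ||X1 - X2||; intuitively, this is why per-layer noise is not amplified as it flows through deeper layers, which underlies the convergence of the privacy budget claimed above.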
