Jonas Böhler (SAP Security Research), Florian Kerschbaum (University of Waterloo)

In distributed private learning, e.g., data analysis, machine learning, and enterprise benchmarking, it is commonplace for two parties with confidential data sets to compute statistics over their combined data. The median is an important robust statistical method used in enterprise benchmarking: companies compare typical employee salaries, insurance companies use median life expectancy to adjust insurance premiums, banks compare their customers' credit scores, and financial regulators estimate risks based on loan exposures.

The exact median can be computed securely; however, it leaks information about the private data. To protect the data sets, we securely compute a differentially private median over the joint data set via the exponential mechanism. The exponential mechanism has a runtime linear in the size of the data universe, and sampling from it efficiently is non-trivial. Local differential privacy, where each user shares locally perturbed data with an untrusted server, is often used in private learning but does not provide the same utility as the central model, where noise is applied only once by a trusted server.
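As a plaintext illustration of the underlying mechanism (not the paper's secure protocol), the following Python sketch samples a differentially private median via the exponential mechanism: every candidate in the data universe is weighted by exp(epsilon * u / (2 * sensitivity)), which is what makes the naive approach linear in the universe size. The function names and the rank-based utility function are illustrative assumptions for this sketch, not the authors' code.

import math
import random

def median_utility(data, candidate):
    # Illustrative rank-based utility: higher when the candidate splits
    # the data more evenly, maximal at the true median.
    below = sum(1 for x in data if x < candidate)
    above = sum(1 for x in data if x > candidate)
    return -abs(below - above)

def exponential_mechanism_median(data, universe, epsilon, sensitivity=1.0):
    # Unnormalized selection weights exp(eps * u / (2 * sensitivity)).
    # Note the loop over the whole universe: runtime is linear in its size,
    # which is the bottleneck the paper's sublinear protocol avoids.
    weights = [math.exp(epsilon * median_utility(data, c) / (2 * sensitivity))
               for c in universe]
    total = sum(weights)
    # Inverse-transform sampling over the weighted candidates.
    r = random.uniform(0, total)
    acc = 0.0
    for candidate, w in zip(universe, weights):
        acc += w
        if r <= acc:
            return candidate
    return universe[-1]

# Example: the parties' joint data over a small integer universe.
joint_data = [3, 7, 8, 12, 15, 21, 22]
print(exponential_mechanism_median(joint_data, range(0, 32), epsilon=1.0))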

We present an efficient secure computation of a differentially private median of the union of two large, confidential data sets. Our protocol has a runtime sublinear in the size of the data universe and utility comparable to the central model, without a trusted third party. We use dynamic programming with a static, i.e., data-independent, access pattern, achieving low complexity of the secure computation circuit. We provide a comprehensive evaluation on a large real-world data set, with a practical runtime of less than 5 seconds for millions of records, even with a large network delay of 80 ms.
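To illustrate what a static, data-independent access pattern means for circuit complexity, the sketch below uses a generic oblivious building block (odd-even transposition sort). It is an assumed, minimal example, not the paper's dynamic program: the sequence of memory accesses depends only on the input length, never on the secret values, so each step compiles to a constant-size gadget in a secure computation circuit.

def compare_exchange(a, i, j):
    # Accesses positions i and j unconditionally; only the (secret) values
    # determine which one ends up where. In a garbled circuit this becomes
    # a fixed-size comparator-and-swap gadget.
    lo, hi = min(a[i], a[j]), max(a[i], a[j])
    a[i], a[j] = lo, hi

def oblivious_sort(a):
    # Odd-even transposition sort: the positions touched in each round are
    # a function of len(a) alone, so the access pattern is static.
    n = len(a)
    for round_ in range(n):
        start = round_ % 2
        for i in range(start, n - 1, 2):
            compare_exchange(a, i, i + 1)
    return a

data = [21, 3, 15, 8, 22, 7, 12]
print(oblivious_sort(data))  # access pattern is identical for any 7 inputs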
