Ruizhe Wang (University of Waterloo), Roberta De Viti (MPI-SWS), Aarushi Dubey (University of Washington), Elissa Redmiles (Georgetown University)

The voluntary donation of private health information for altruistic purposes, such as supporting research advancements, is a common practice. However, concerns about data misuse and leakage may deter people from donating their information. Privacy Enhancing Technologies (PETs) aim to alleviate these concerns and, in turn, enable safe and private data sharing. This study conducts a vignette survey (N = 494) with participants recruited from Prolific to examine US-based people's willingness to donate medical data for developing new treatments under four general guarantees offered across PETs: data expiration, anonymization, purpose restriction, and access control. The study explores two mechanisms for verifying these guarantees (self-auditing and expert auditing) and controls for the impact of confounds, including demographics and two types of data collectors: for-profit and non-profit institutions.

Our findings reveal that respondents hold such high a priori expectations of privacy from non-profit entities that explicitly outlining privacy protections has little impact on their overall perceptions. In contrast, offering privacy guarantees elevates respondents' expectations of privacy for for-profit entities, bringing them nearly in line with those for non-profit organizations. Further, while the technical community has suggested audits as a mechanism to increase trust in PET guarantees, we observe that transparency about such audits has limited effect. We emphasize the risks associated with these findings and underscore the critical need for future interdisciplinary research to bridge the gap between the technical community's and end-users' perceptions of the effectiveness of auditing PETs.