Xinyao Ma, Ambarish Aniruddha Gurjar, Anesu Christopher Chaora, Tatiana R Ringenberg, L. Jean Camp (Luddy School of Informatics, Computing, and Engineering, Indiana University Bloomington)

This study examines the crucial role developers play in identifying privacy-sensitive information in code. The research is motivated by diverse global data protection regulations, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). It specifically investigates programmers' ability to discern the sensitivity level of data processing in code, a task of growing importance given increasing legislative demands for data privacy.

We conducted an online card-sorting experiment to explore how participating programmers across a range of expertise perceive the sensitivity of variable names in code snippets. Our study evaluates the accuracy, feasibility, and reliability of participants' judgments about what constitutes a 'sensitive' variable. We further evaluate whether there is a consensus among programmers, how their level of security knowledge influences any consensus, and whether any consensus or effect of expertise is consistent across different categories of variables. Our findings reveal a lack of consistency among participants regarding the sensitivity of processing different types of data, as indicated by code snippets with distinct variable names. Opinions diverge significantly, particularly among those with more technical expertise: as technical expertise increases, consensus decreases across the categories of sensitive data. This study not only sheds light on the current state of programmers' privacy awareness but also motivates the need for better industry practices and tools for automatically identifying sensitive data in code.
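To make the notion of automatically flagging sensitive variables concrete, here is a minimal sketch of a keyword-based classifier. The categories and keyword lists are illustrative assumptions, not the study's actual taxonomy, and real tools would need far richer heuristics than name matching.

```python
import re

# Hypothetical category-to-keyword map; these terms are illustrative
# assumptions, not the taxonomy used in the study.
SENSITIVE_TERMS = {
    "identity": {"ssn", "passport", "dob", "birthdate"},
    "financial": {"card_number", "cvv", "iban", "salary"},
    "health": {"diagnosis", "medication", "blood_type"},
    "credentials": {"password", "api_key", "token", "secret"},
}


def classify_variable(name: str) -> list[str]:
    """Return the sensitive-data categories a variable name matches.

    Splits snake_case and camelCase names into lowercase tokens, then
    checks the tokens (and the whole lowered name, via substring match)
    against each category's keyword list.
    """
    lowered = name.lower()
    # Insert "_" at camelCase boundaries, then split on "_" and non-word chars.
    snake = re.sub(r"(?<=[a-z])(?=[A-Z])", "_", name).lower()
    tokens = set(re.split(r"[_\W]+", snake))
    tokens.add(lowered)
    return sorted(
        cat
        for cat, terms in SENSITIVE_TERMS.items()
        if tokens & terms or any(t in lowered for t in terms)
    )
```

A card-sorting-style disagreement can then be framed as two programmers assigning different categories (or none) to the same name that this heuristic labels, e.g. `classify_variable("userPassword")` versus an unflagged `classify_variable("loop_counter")`.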

