Ece Gumusel (University of Illinois Urbana-Champaign), Yueru Yan (Indiana University Bloomington), Ege Otenen (Indiana University Bloomington)

Interacting with Large Language Model (LLM) chatbots exposes users to new security and privacy challenges, yet little is known about how people perceive and manage these risks. While prior research has largely examined technical vulnerabilities, users’ perceptions of privacy—particularly in the United States, where regulatory protections are limited—remain underexplored. In this study, we surveyed 267 U.S.-based LLM users to understand their privacy perceptions, practices, and data-sharing preferences, and how demographics and prior LLM experience shape these behaviors. Results show low awareness of privacy policies, moderate concern over data handling, and reluctance to share sensitive information such as Social Security or credit card numbers. Usage frequency and prior experience strongly influence comfort and control behaviors, while demographic factors shape disclosure patterns for certain types of personal data. These findings reveal privacy behaviors that diverge from traditional online practices and uncover nuanced trade-offs that could introduce security risks in LLM interactions. Building on these lessons, we provide actionable guidance for reducing user-related vulnerabilities and shaping effective policy and governance.


