Jasmin Schwab (German Aerospace Center (DLR)), Alexander Nussbaum (University of the Bundeswehr Munich), Anastasia Sergeeva (University of Luxembourg), Florian Alt (University of the Bundeswehr Munich and Ludwig Maximilian University of Munich), and Verena Distler (Aalto University)

Organizations depend on their employees’ long-term cooperation to help protect the organization from cybersecurity threats. Phishing attacks are the entry point for harmful follow-up attacks. The acceptance of training measures is thus crucial. Many organizations use simulated phishing campaigns to train employees to adopt secure behaviors. We conducted a preregistered vignette experiment (N=793), investigating the factors that make a simulated phishing campaign seem (un)acceptable, and their influence on employees’ intention to manipulate the campaign. In the experiment, we varied whether employees gave prior consent, whether the phishing email promised a financial incentive, and the consequences for employees who clicked on the phishing link. We found that employees’ prior consent positively affected the acceptance of a simulated phishing campaign. The consequences of “employee interview” and “termination of the work contract” negatively affected acceptance. We found no statistically significant effects of consent, monetary incentive, or consequences on manipulation probability. Our results shed light on the factors influencing the acceptance of simulated phishing campaigns. Based on our findings, we recommend that organizations prioritize obtaining informed consent from employees before including them in simulated phishing campaigns and that they clearly describe their consequences. Organizations should carefully evaluate the acceptance of simulated phishing campaigns and consider alternative anti-phishing measures.
