Lea Duesterwald (Carnegie Mellon University), Ian Yang (Carnegie Mellon University), Norman Sadeh (Carnegie Mellon University)

Human actions, or the lack thereof, contribute to a large majority of cybersecurity incidents. Traditionally, when looking for advice on cybersecurity questions, people have turned to search engines or social sites such as Reddit. The rapid adoption of chatbot technologies offers a potentially more direct way of getting similar advice. Initial research suggests, however, that while chatbot answers to common cybersecurity questions tend to be fairly accurate, they may not be very effective, as they often fall short on other desired qualities such as understandability, actionability, or motivational power. Research in this area has so far been limited to evaluations conducted by the researchers themselves on small numbers of synthetic questions. This article reports on what we believe to be the first in situ evaluation of a cybersecurity Question Answering (QA) assistant. We also evaluate a prompt engineered to help the cybersecurity QA assistant generate more effective answers. The study involved a 10-day deployment of the assistant in the form of a Chrome extension. Collectively, participants (N=51) evaluated answers generated by the assistant to over 1,000 cybersecurity questions they submitted as part of their regular day-to-day activities. The results suggest that a majority of participants found the assistant useful and often took actions based on the answers they received. In particular, the study indicates that prompting successfully improved the effectiveness of answers, most notably the likelihood that users follow their recommendations (the fraction of participants who actually followed the advice was 0.514 with prompting vs. 0.402 without, p = 4.61E-04), an impact on people’s actual behavior. We provide a detailed analysis of the data collected in this study, discuss their implications, and outline next steps in the development and deployment of effective cybersecurity QA assistants, which offer the promise of changing actual user behavior and reducing human-related security incidents.
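The headline result above is a difference between two proportions (0.514 vs. 0.402 of advice followed, with vs. without the engineered prompt). As a purely illustrative sketch of how such a difference can be tested, the Python snippet below runs a two-proportion z-test with statsmodels; the counts are hypothetical placeholders, since per-condition sample sizes are not reported here, and this is not the authors' actual analysis.

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts, NOT the study's data: number of rated answers whose
# advice was followed, and total rated answers, with vs. without the prompt.
followed = [257, 201]
rated = [500, 500]

z_stat, p_value = proportions_ztest(count=followed, nobs=rated)
print(f"rates: {followed[0] / rated[0]:.3f} vs. {followed[1] / rated[1]:.3f}")
print(f"z = {z_stat:.3f}, p = {p_value:.2e}")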
