Lea Duesterwald (Carnegie Mellon University), Ian Yang (Carnegie Mellon University), Norman Sadeh (Carnegie Mellon University)

Human actions, or the lack thereof, contribute to a large majority of cybersecurity incidents. Traditionally, when looking for advice on cybersecurity questions, people have turned to search engines or social sites such as Reddit. The rapid adoption of chatbot technologies offers a potentially more direct way of getting similar advice. Initial research suggests, however, that while chatbot answers to common cybersecurity questions tend to be fairly accurate, they may not be very effective, as they often fall short on other desired qualities such as understandability, actionability, or motivational power. Research in this area has so far been limited to evaluations conducted by researchers themselves on small numbers of synthetic questions. This article reports on what we believe to be the first in situ evaluation of a cybersecurity Question Answering (QA) assistant. We also evaluate a prompt engineered to help the assistant generate more effective answers. The study involved a 10-day deployment of the assistant in the form of a Chrome extension. Collectively, participants (N=51) evaluated answers generated by the assistant to over 1,000 cybersecurity questions they submitted as part of their regular day-to-day activities. The results suggest that a majority of participants found the assistant useful and often took action based on the answers they received. Notably, the study indicates that prompting successfully improved the effectiveness of answers and, in particular, the likelihood that users would follow their recommendations: the fraction of participants who actually followed the advice was 0.514 with prompting vs. 0.402 without (p = 4.61E-04), an impact on people's actual behavior. We provide a detailed analysis of the data collected in this study, discuss its implications, and outline next steps in the development and deployment of effective cybersecurity QA assistants, which offer the promise of changing actual user behavior and reducing human-related security incidents.
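For readers curious how a difference of this size reaches a p-value on the order of 4.61E-04, the sketch below runs a standard two-proportion z-test on hypothetical counts. The abstract does not give the exact per-condition sample sizes, so the counts here (roughly 500 evaluated answers per condition, consistent with the reported 1,000+ questions) are assumptions for illustration; this is not the authors' actual analysis.

```python
# A minimal sketch (not from the paper): a two-proportion z-test over
# hypothetical per-condition counts, illustrating how a difference like
# 0.514 vs. 0.402 in "followed the advice" rates can be tested.
from math import sqrt, erfc

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Return (z, two-sided p) for H0: p1 == p2, using a pooled estimate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal survival function:
    # erfc(|z| / sqrt(2)) equals 2 * (1 - Phi(|z|)).
    p = erfc(abs(z) / sqrt(2))
    return z, p

# Hypothetical split: 500 answers per condition (exact sizes not reported here).
z, p = two_proportion_ztest(x1=257, n1=500, x2=201, n2=500)  # 0.514 vs. 0.402
print(f"z = {z:.2f}, p = {p:.2e}")  # yields p on the order of 1e-4
```

With these assumed counts the test yields p of roughly 4e-04, the same order of magnitude as the value reported in the abstract; the exact figure depends on the true per-condition sample sizes and on which test the authors used.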
