Lea Duesterwald (Carnegie Mellon University), Ian Yang (Carnegie Mellon University), Norman Sadeh (Carnegie Mellon University)

Human actions, or the lack thereof, contribute to a large majority of cybersecurity incidents. Traditionally, when looking for advice on cybersecurity questions, people have turned to search engines or social sites such as Reddit. The rapid adoption of chatbot technologies offers a potentially more direct way of getting similar advice. Initial research suggests, however, that while chatbot answers to common cybersecurity questions tend to be fairly accurate, they may not be very effective, as they often fall short on other desired qualities such as understandability, actionability, or motivational power. Research in this area has thus far been limited to evaluations conducted by researchers themselves on small numbers of synthetic questions. This article reports on what we believe to be the first in situ evaluation of a cybersecurity Question Answering (QA) assistant. We also evaluate a prompt engineered to help the assistant generate more effective answers. The study involved a 10-day deployment of a cybersecurity QA assistant in the form of a Chrome extension. Collectively, participants (N=51) evaluated answers generated by the assistant to over 1,000 cybersecurity questions they submitted as part of their regular day-to-day activities. The results suggest that a majority of participants found the assistant useful and often took actions based on the answers they received. In particular, the study indicates that prompting successfully improved the effectiveness of answers, most notably the likelihood that users follow their recommendations (the fraction of participants who actually followed the advice was 0.514 with prompting vs. 0.402 without, p = 4.61E-04), demonstrating an impact on people's actual behavior.
We provide a detailed analysis of data collected in this study, discuss their implications, and outline next steps in the development and deployment of effective cybersecurity QA assistants that offer the promise of changing actual user behavior and of reducing human-related security incidents.
