Megan Nyre-Yu (Sandia National Laboratories), Elizabeth S. Morris (Sandia National Laboratories), Blake Moss (Sandia National Laboratories), Charles Smutz (Sandia National Laboratories), Michael R. Smith (Sandia National Laboratories)

Technological advances in artificial intelligence (AI) and explainable AI (xAI) techniques have reached a stage of development that requires a better understanding of operational context. AI tools are primarily viewed as black boxes, and some hesitation exists in employing them due to a lack of trust and transparency. xAI technologies largely aim to overcome these issues and to improve operators' efficiency and effectiveness, speeding up the process and allowing for more consistent and informed decision making based on AI outputs. Such efforts require not only robust and reliable models, but also explanations that are relevant and understandable to end users, in order to help users achieve their goals, reduce bias, and improve trust in AI models. Cybersecurity operations settings represent one such context, in which automation is vital for maintaining cyber defenses. AI models and xAI techniques were developed to aid analysts in identifying events and making decisions about flagged events (e.g., a network attack). We instrumented the tools used in cybersecurity operations to unobtrusively collect data and evaluate the effectiveness of the xAI tools. During a pilot study for deployment, we found that the xAI tools, although intended to increase trust and improve efficiency, were not heavily utilized, nor did they improve analyst decision accuracy. Critical lessons were learned that impact the utility and adoptability of the technology, including consideration of end users, their workflows, their environments, and their propensity to trust xAI outputs.
