Laura Matzen, Michelle A Leger, Geoffrey Reedy (Sandia National Laboratories)

Binary reverse engineers combine automated and manual techniques to answer questions about software. However, when evaluating automated analysis results, they rarely have additional information to help them contextualize these results in the binary. We expect that humans could more readily understand the binary program and these analysis results if they had access to information usually kept internal to the analysis, like value-set analysis (VSA) information. However, these automated analyses often give up precision for scalability, and imprecise information might hinder human decision making.
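The precision trade-off can be illustrated with a toy sketch (all names here are hypothetical, not from the study): a precise analysis reports the exact set of values a variable may hold at a program point, while a scalable but imprecise analysis might widen that set to an interval that also admits values the program can never produce.

```python
def precise_value_set(values):
    """Exact set of values a variable may hold at a program point."""
    return set(values)

def widened_interval(values):
    """Imprecise over-approximation: collapse the set to [min, max]."""
    return (min(values), max(values))

# A variable observed to hold 0, 4, or 8 (e.g., a scaled array index):
observed = [0, 4, 8]
print(precise_value_set(observed))   # {0, 4, 8}
print(widened_interval(observed))    # (0, 8) -- also admits 1..7
```

An analyst shown `{0, 4, 8}` can rule out intermediate indices immediately; one shown `(0, 8)` must reason about values the program never takes, which is exactly the kind of imprecision the study probes.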

To assess how the precision of VSA information affects human analysts, we designed a human study in which reverse engineers answered short information-flow problems, determining whether code snippets would print sensitive information. We hypothesized that precise VSA information would help our participants analyze code faster and more accurately, and that imprecise VSA information would lead to slower, less accurate performance than no VSA information. We presented hand-crafted code snippets with precise, imprecise, or no VSA information in a blocked design, recording participants’ eye movements, response times, and accuracy while they analyzed the snippets. Our data showed that precise VSA information changed participants’ problem-solving strategies and supported faster, more accurate analyses. Surprisingly, however, imprecise VSA information also increased accuracy relative to no VSA information, likely because of the extra time participants spent working through the code.
