Cherin Lim, Tianhao Xu, Prashanth Rajivan (University of Washington)

Human trust is critical for the adoption and continued use of autonomous vehicles (AVs). Experiencing vehicle failures that stem from security threats to the underlying technologies enabling autonomous driving can significantly degrade drivers’ trust in AVs. It is therefore crucial to understand and measure how security threats to AVs impact human trust. To this end, we conducted a driving simulator study with forty participants who underwent three drives, including one with simulated cybersecurity attacks. We hypothesize that drivers’ trust in the vehicle is reflected in their body posture, foot movement, and engagement with vehicle controls during the drive. To test this hypothesis, we extracted body posture features from each frame of the video recordings, computed skeletal angles, and performed k-means clustering on these values to classify drivers’ foot positions. In this paper, we present an algorithmic pipeline for the automatic analysis of body posture and objective measurement of trust that could be used to build AVs capable of trust calibration after security attack events.
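The pipeline described above can be illustrated with a minimal sketch: computing a skeletal angle at a joint from three pose keypoints (e.g., hip–knee–ankle), then running 1-D k-means over the per-frame angle values to separate foot positions into clusters. The keypoint layout, the choice of joint, and the two-cluster interpretation (e.g., foot on pedal vs. foot resting) are assumptions for illustration, not the authors’ exact implementation.

```python
import math
import random

def joint_angle(a, b, c):
    """Angle (degrees) at keypoint b formed by segments b->a and b->c,
    e.g. a hypothetical hip-knee-ankle triple from a pose estimator."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Plain Lloyd's-algorithm k-means over scalar angle values.

    Returns (centroids, labels); labels[i] is the cluster index of values[i].
    """
    rng = random.Random(seed)
    centroids = rng.sample(list(values), k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:  # converged
            break
        centroids = new
    labels = [min(range(k), key=lambda i: abs(v - centroids[i]))
              for v in values]
    return centroids, labels

# Per-frame knee angles (synthetic): two distinct postures.
angles = [joint_angle((0, 1), (0, 0), (1, 0))] * 3 + [150.0, 152.0, 151.0]
centroids, labels = kmeans_1d(angles, k=2)
```

In a real pipeline the keypoints would come from a per-frame pose estimator, and the resulting cluster labels would serve as the discrete foot-position classes referenced in the abstract.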
