Cherin Lim, Tianhao Xu, Prashanth Rajivan (University of Washington)

Human trust is critical for the adoption and continued use of autonomous vehicles (AVs). Experiencing vehicle failures that stem from security threats to the underlying technologies that enable autonomous driving can significantly degrade drivers' trust in AVs. It is therefore crucial to understand and measure how security threats to AVs impact human trust. To this end, we conducted a driving simulator study with forty participants who underwent three drives, including one with simulated cybersecurity attacks. We hypothesize that drivers' trust in the vehicle is reflected in their body posture, foot movement, and engagement with vehicle controls during the drive. To test this hypothesis, we extracted body posture features from each frame of the video recordings, computed skeletal angles, and performed k-means clustering on these values to classify drivers' foot positions. In this paper, we present an algorithmic pipeline for automatic analysis of body posture and objective measurement of trust that could be used to build AVs capable of trust calibration after security attack events.
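The pipeline step described above (skeletal angles per frame, then k-means over those values) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the keypoint coordinates, the choice of the knee joint, and the two-cluster setup are assumptions made for the example, and a compact k-means is written inline to keep the sketch self-contained.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by keypoints a-b-c
    (e.g. hip-knee-ankle from a pose-estimation model)."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means; returns a cluster label for each row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each frame's feature vector to the nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical per-frame knee angles; in the study these would come
# from posture features extracted from each video frame.
angles = np.array([
    [joint_angle((0, 1), (0, 0), (1, 0))],      # ~90 deg, foot extended
    [joint_angle((0, 1), (0, 0), (0.1, 1))],    # small angle, foot retracted
    [joint_angle((0, 1), (0, 0), (0.2, 1))],    # small angle, foot retracted
])
labels = kmeans(angles, k=2)  # two assumed foot-position classes
```

With real data, each row would hold one or more skeletal angles per video frame, and the resulting cluster labels would classify foot position (e.g. on versus off the pedals) across the drive.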
