Cherin Lim, Tianhao Xu, Prashanth Rajivan (University of Washington)

Human trust is critical for the adoption and continued use of autonomous vehicles (AVs). Experiencing vehicle failures that stem from security threats to the underlying technologies enabling autonomous driving can significantly degrade drivers' trust in AVs. It is therefore crucial to understand and measure how security threats to AVs affect human trust. To this end, we conducted a driving simulator study with forty participants, each of whom completed three drives, one of which included simulated cybersecurity attacks. We hypothesize that drivers' trust in the vehicle is reflected in their body posture, foot movement, and engagement with vehicle controls during the drive. To test this hypothesis, we extracted body posture features from each frame of the video recordings, computed skeletal angles, and applied k-means clustering to these values to classify drivers' foot positions. In this paper, we present an algorithmic pipeline for automatic analysis of body posture and objective measurement of trust that could be used to build AVs capable of trust calibration after security attack events.
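As an illustrative sketch only (not the authors' exact implementation), the posture-analysis step described above could be approximated as follows: per-frame body keypoints, assumed here to come from an off-the-shelf pose estimator, are converted into skeletal joint angles, and k-means is applied to cluster the resulting angle values into discrete foot-position classes. All function names, keypoint fields, and the number of clusters below are hypothetical choices for illustration.

import numpy as np
from sklearn.cluster import KMeans

def joint_angle(a, b, c):
    # Angle (in degrees) at joint b formed by 2D keypoints a-b-c,
    # e.g. hip-knee-ankle, computed from the two limb vectors.
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify_foot_positions(frames, n_clusters=3):
    # frames: list of dicts with 'hip', 'knee', 'ankle' 2D keypoints per video frame
    # (assumed output of a pose estimator). Returns one cluster label per frame,
    # interpreted as a discrete foot-position class.
    angles = np.array([[joint_angle(f['hip'], f['knee'], f['ankle'])] for f in frames])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(angles)

In such a sketch, the cluster count (here three, e.g. foot on pedal, hovering, and resting) would need to be chosen and validated against the study's own annotations; the paper's pipeline may differ in features and clustering setup.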
