Cherin Lim, Tianhao Xu, Prashanth Rajivan (University of Washington)

Human trust is critical for the adoption and continued use of autonomous vehicles (AVs). Experiencing vehicle failures that stem from security threats to the underlying technologies enabling autonomous driving can significantly degrade drivers’ trust in AVs. It is therefore crucial to understand and measure how security threats to AVs affect human trust. To this end, we conducted a driving simulator study with forty participants, each of whom completed three drives, one of which included simulated cybersecurity attacks. We hypothesized that drivers’ trust in the vehicle is reflected in their body posture, foot movement, and engagement with vehicle controls during the drive. To test this hypothesis, we extracted body posture features from each frame of the video recordings, computed skeletal angles, and performed k-means clustering on these values to classify drivers’ foot positions. In this paper, we present an algorithmic pipeline for automatic analysis of body posture and objective measurement of trust that could be used to build AVs capable of trust calibration after security attack events.
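The core of the described pipeline, computing a skeletal angle from pose keypoints per frame and clustering those angles with k-means to label foot positions, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the keypoint names, the two-cluster assumption (e.g., foot on pedal vs. foot retracted), and the helper functions are all hypothetical.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by keypoints a-b-c,
    e.g., the knee angle from hip-knee-ankle coordinates."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def kmeans_labels(X, k=2, iters=50, seed=0):
    """Minimal k-means (Lloyd's algorithm); returns a cluster
    label per sample, standing in for a library implementation."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each frame's angle vector to its nearest center.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        # Recompute each center as the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical usage: one knee angle per video frame, clustered
# into two foot-position classes.
frames = [((0, 0), (0, 1), (1, 1)),   # hip, knee, ankle per frame
          ((0, 0), (0, 1), (0, 2)),
          ((0, 0), (0, 1), (1, 2))]
angles = np.array([[joint_angle(*kp)] for kp in frames])
foot_position = kmeans_labels(angles, k=2)
```

In practice the per-frame keypoints would come from a pose-estimation model run on the cabin video, and more than one angle could be stacked into the feature vector before clustering.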

