Katherine S. Zhang (Purdue University), Claire Chen (Pennsylvania State University), Aiping Xiong (Pennsylvania State University)

Artificial intelligence (AI) systems in autonomous driving are vulnerable to a number of attacks, particularly physical-world attacks that tamper with objects in the driving environment to induce AI errors. When AI systems fail, or are about to fail, human drivers must take over vehicle control. To understand such human–AI collaboration, in this work we examine 1) whether human drivers can detect these attacks, 2) how they project the consequences for autonomous driving, and 3) what information they expect in order to take over vehicle control safely. We conducted an online survey on Prolific. Participants (N = 100) viewed benign and adversarial images of two physical-world attacks, along with videos of simulated driving under both attacks. Our results show that participants did not appear to be aware of the attacks. They overestimated the AI's ability to detect the object more in the dirty-road attack than in the stop-sign attack, and this overestimation also appeared when they predicted the AI's performance in autonomous driving. We also found that participants expected different kinds of information (e.g., warnings and AI explanations) to safely take over control of autonomous driving.


POSE: Practical Off-chain Smart Contract Execution

Tommaso Frassetto (Technical University of Darmstadt), Patrick Jauernig (Technical University of Darmstadt), David Koisser (Technical University of Darmstadt), David Kretzler (Technical University of Darmstadt), Benjamin Schlosser (Technical University of Darmstadt), Sebastian Faust (Technical University of Darmstadt), Ahmad-Reza Sadeghi (Technical University of Darmstadt)


“I didn't click”: What users say when reporting phishing

Nikolas Pilavakis, Adam Jenkins, Nadin Kokciyan, Kami Vaniea (University of Edinburgh)


A Security Study about Electron Applications and a Programming...

Zihao Jin (Microsoft Research and Tsinghua University), Shuo Chen (Microsoft Research), Yang Chen (Microsoft Research), Haixin Duan (Tsinghua University and Quancheng Laboratory), Jianjun Chen (Tsinghua University and Zhongguancun Laboratory), Jianping Wu (Tsinghua University)


Parakeet: Practical Key Transparency for End-to-End Encrypted Messaging

Harjasleen Malvai (UIUC/IC3), Lefteris Kokoris-Kogias (IST Austria), Alberto Sonnino (Mysten Labs), Esha Ghosh (Microsoft Research), Ercan Oztürk (Meta), Kevin Lewi (Meta), Sean Lawlor (Meta)
