Katherine S. Zhang (Purdue University), Claire Chen (Pennsylvania State University), Aiping Xiong (Pennsylvania State University)

Artificial intelligence (AI) systems in autonomous driving are vulnerable to a number of attacks, particularly physical-world attacks that tamper with physical objects in the driving environment to cause AI errors. When AI systems fail or are about to fail, human drivers are required to take over vehicle control. To understand such human-AI collaboration, in this work, we examine 1) whether human drivers can detect these attacks, 2) how they project the consequent autonomous driving behavior, and 3) what information they expect in order to safely take over vehicle control. We conducted an online survey on Prolific. Participants (N = 100) viewed benign and adversarial images of two physical-world attacks. We also presented videos of simulated driving for both attacks. Our results show that participants did not seem to be aware of the attacks. They overestimated the AI’s ability to detect the attacked object more in the dirty-road attack than in the stop-sign attack. Such overestimation was also evident when participants predicted the AI’s ability in autonomous driving. We also found that participants expected different information (e.g., warnings and AI explanations) for safely taking over control of autonomous driving.

View More Papers

An OS-agnostic Approach to Memory Forensics

Andrea Oliveri (EURECOM), Matteo Dell'Amico (University of Genoa), Davide Balzarotti (EURECOM)

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better...

Siyuan Cheng (Purdue University), Guanhong Tao (Purdue University), Yingqi Liu (Purdue University), Shengwei An (Purdue University), Xiangzhe Xu (Purdue University), Shiwei Feng (Purdue University), Guangyu Shen (Purdue University), Kaiyuan Zhang (Purdue University), Qiuling Xu (Purdue University), Shiqing Ma (Rutgers University), Xiangyu Zhang (Purdue University)

MetaWave: Attacking mmWave Sensing with Meta-material-enhanced Tags

Xingyu Chen (University of Colorado Denver), Zhengxiong Li (University of Colorado Denver), Baicheng Chen (University of California San Diego), Yi Zhu (SUNY at Buffalo), Chris Xiaoxuan Lu (University of Edinburgh), Zhengyu Peng (Aptiv), Feng Lin (Zhejiang University), Wenyao Xu (SUNY Buffalo), Kui Ren (Zhejiang University), Chunming Qiao (SUNY at Buffalo)

Augmented Reality’s Potential for Identifying and Mitigating Home Privacy...

Stefany Cruz (Northwestern University), Logan Danek (Northwestern University), Shinan Liu (University of Chicago), Christopher Kraemer (Georgia Institute of Technology), Zixin Wang (Zhejiang University), Nick Feamster (University of Chicago), Danny Yuxing Huang (New York University), Yaxing Yao (University of Maryland), Josiah Hester (Georgia Institute of Technology)

Privacy Starts with UI: Privacy Patterns and Designer Perspectives in UI/UX Practice

Anxhela Maloku (Technical University of Munich), Alexandra Klymenko (Technical University of Munich), Stephen Meisenbacher (Technical University of Munich), Florian Matthes (Technical University of Munich)

Vision: Profiling Human Attackers: Personality and Behavioral Patterns in Deceptive Multi-Stage CTF Challenges

Khalid Alasiri (School of Computing and Augmented Intelligence Arizona State University), Rakibul Hasan (School of Computing and Augmented Intelligence Arizona State University)

From Underground to Mainstream Marketplaces: Measuring AI-Enabled NSFW Deepfakes on Fiverr

Mohamed Moustafa Dawoud (University of California, Santa Cruz), Alejandro Cuevas (Princeton University), Ram Sundara Raman (University of California, Santa Cruz)