Go Tsuruoka (Waseda University), Takami Sato, Qi Alfred Chen (University of California, Irvine), Kazuki Nomoto, Ryunosuke Kobayashi, Yuna Tanaka (Waseda University), Tatsuya Mori (Waseda University/NICT/RIKEN)

Traffic signs communicate critical rules that keep traffic safe and efficient for pedestrians and motor vehicles, so they must be reliably recognized, especially in autonomous driving. However, recent studies have revealed that vision-based traffic sign recognition systems are vulnerable to adversarial attacks, typically mounted with small stickers or laser projections. Our work advances this frontier by exploring a novel attack vector: the Adversarial Retroreflective Patch (ARP) attack. The method is stealthy and particularly effective at night because it exploits the optical properties of retroreflective materials, which reflect light back toward its source. When retroreflective patches are applied to a traffic sign, light reflected from the vehicle's headlights interferes with the camera, creating perturbations that prevent the traffic sign recognition model from correctly detecting the sign. In a preliminary feasibility study of ARP attacks, we observed that while a 100% attack success rate is achievable in digital simulations, the rate drops to at most 90% in physical experiments. Finally, we discuss the current challenges and outline our future plans. This research gains significance in the context of autonomous vehicles' 24/7 operation, underscoring the critical need to assess sensor and AI vulnerabilities, especially in low-light nighttime environments, to ensure the continued safety and reliability of self-driving technologies.
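
The digital-simulation step can be pictured with a minimal sketch (our own illustration, not the authors' code): headlight glare off a retroreflective patch is approximated as a near-saturated region alpha-blended onto a sign image, and a classifier is queried before and after. The patch geometry, blending parameters, and stand-in classifier below are illustrative assumptions, not the parameters or model used in the paper.

import numpy as np

def apply_retroreflective_patch(image, top_left, size, intensity=1.0, alpha=0.9):
    """Blend a near-saturated glare patch into `image` (H x W x 3, floats in [0, 1])."""
    out = image.copy()
    y, x = top_left
    h, w = size
    glare = np.full((h, w, 3), intensity)   # uniform bright glare, stand-in for retroreflection
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (1 - alpha) * region + alpha * glare
    return np.clip(out, 0.0, 1.0)

def dummy_classifier(image):
    # Toy stand-in: any near-saturated region is treated as glare that breaks
    # recognition. The real study targets an actual recognition model.
    return "stop" if image.max() < 0.9 else "unrecognized"

sign = np.random.rand(64, 64, 3) * 0.4      # placeholder for a dimly lit stop-sign image
attacked = apply_retroreflective_patch(sign, top_left=(20, 20), size=(16, 16))

print("clean prediction:   ", dummy_classifier(sign))      # "stop"
print("attacked prediction:", dummy_classifier(attacked))  # "unrecognized"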
