Sri Hrushikesh Varma Bhupathiraju (University of Florida), Takami Sato (University of California, Irvine), Michael Clifford (Toyota Info Labs), Takeshi Sugawara (The University of Electro-Communications), Qi Alfred Chen (University of California, Irvine), Sara Rampazzi (University of Florida)

Connected, autonomous, semi-autonomous, and human-driven vehicles must accurately detect and adhere to traffic light signals to ensure safe and efficient traffic flow. Misinterpretation of traffic lights can result in safety issues. Recent work demonstrated attacks that projected structured light patterns onto vehicle cameras, causing traffic signal misinterpretation. In this work, we introduce a new physical attack method against traffic light recognition systems that exploits a vulnerability in the physical structure of traffic lights. We observe that when laser light is projected onto traffic lights, it is scattered by reflectors (mirrors) located inside the traffic lights. To a vehicle’s camera, the attacker-injected laser light appears to be a genuine light source, resulting in misclassifications by traffic light recognition models. We show that our methodology can induce misclassifications using both visible and invisible light, whether the traffic light is operational (on) or not (off). We present classification results for three state-of-the-art traffic light recognition models and show that this attack can cause misclassification of both red and green traffic light status. Tested on incandescent traffic lights, our attack can be deployed from up to 25 meters away from the target traffic light. It reaches an attack success rate of 100% in misclassifying green status, and 86% in misclassifying red status, in a controlled, dynamic scenario.
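To make the attack intuition concrete, here is a minimal toy sketch (not the paper's actual pipeline or models): a naive hue-vote "classifier" over a small list of RGB pixels, showing how a bright attacker-colored spot, as if laser light scattered off the lamp's internal reflector toward the camera, can flip the decision. The function name, pixel values, and thresholds are all hypothetical illustrations.

```python
# Toy illustration only: a naive hue-vote classifier over RGB pixels.
# All thresholds and pixel values below are hypothetical, chosen to
# mimic a lit red lamp and an injected green laser spot.

def classify(pixels):
    """Vote red vs. green by counting bright, strongly colored pixels."""
    red = sum(1 for r, g, b in pixels if r > 200 and g < 100)
    green = sum(1 for r, g, b in pixels if g > 200 and r < 100)
    return "red" if red >= green else "green"

# A lit red lamp: bright-red lamp pixels plus dark housing pixels.
frame = [(230, 40, 30)] * 20 + [(10, 10, 10)] * 40
print(classify(frame))  # -> red

# Emulate injected laser light: a green spot scattered by the
# internal reflector, appearing larger than the genuine lamp.
attacked = frame + [(30, 240, 50)] * 25
print(classify(attacked))  # -> green
```

Real recognition models are far more complex, but the sketch captures why a scattered laser spot that photometrically resembles a genuine lamp can dominate the model's color evidence.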
