Sri Hrushikesh Varma Bhupathiraju (University of Florida), Takami Sato (University of California, Irvine), Michael Clifford (Toyota Info Labs), Takeshi Sugawara (The University of Electro-Communications), Qi Alfred Chen (University of California, Irvine), Sara Rampazzi (University of Florida)

Connected, autonomous, semi-autonomous, and human-driven vehicles must accurately detect and adhere to traffic light signals to ensure safe and efficient traffic flow. Misinterpretation of traffic lights can lead to safety hazards. Recent work demonstrated attacks that project structured light patterns onto vehicle cameras, causing traffic signal misinterpretation. In this work, we introduce a new physical attack against traffic light recognition systems that exploits a vulnerability in the physical structure of traffic lights. We observe that when laser light is projected onto a traffic light, it is scattered by the reflectors (mirrors) located inside the housing. To a vehicle's camera, the attacker-injected laser light appears to be a genuine light source, resulting in misclassifications by traffic light recognition models. We show that our method can induce misclassifications using both visible and invisible light, whether the traffic light is operational (on) or not operational (off). We present classification results for three state-of-the-art traffic light recognition models and show that the attack can cause misclassification of both red and green traffic light status. Tested on incandescent traffic lights, the attack can be deployed from up to 25 meters away from the target traffic light, achieving a success rate of 100% in misclassifying green status and 86% in misclassifying red status in a controlled, dynamic scenario.
