Takami Sato (UC Irvine) and Qi Alfred Chen (UC Irvine)

Deep Neural Network (DNN)-based lane detection is widely used in autonomous driving technologies. At the same time, recent studies have demonstrated that adversarial attacks on lane detection can have serious consequences for particular production-grade autonomous driving systems. However, the generality of these attacks, especially their effectiveness against other state-of-the-art lane detection approaches, has not been well studied. In this work, we report our progress on the first large-scale empirical study evaluating the robustness of 4 major types of lane detection methods under 3 types of physical-world adversarial attacks in end-to-end driving scenarios. We find that each lane detection method has distinct security characteristics; in particular, some models are highly vulnerable to certain types of attacks. Surprisingly, but probably not coincidentally, popular production lane centering systems adopt the lane detection approach that shows higher resistance to such attacks. In the near future, more and more automakers will include autonomous driving features in their products. We hope that our study will help as many automakers as possible recognize the risks involved in choosing lane detection algorithms.
