Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector. We also improve attack robustness by optimizing over a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack causes end-to-end consequences on a representative autonomous driving system in a simulator.
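Optimizing a patch over many frames of the attack scenario is in the spirit of Expectation-over-Transformation-style attacks. The sketch below is a toy illustration of that idea only, not the authors' actual method or detector: a hypothetical linear "confidence" probe stands in for the obstacle detector, and random per-frame visibility masks stand in for how the patch appears at different distances; the patch update averages the gradient over all sampled frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a detector's obstacle-confidence score:
# a fixed linear probe over patch pixels (illustration only).
w = rng.normal(size=(8, 8))
# Random visibility masks mimic how the patch is seen across the
# frames of an approach sequence (distance, partial occlusion).
masks = [(rng.random((8, 8)) > 0.3).astype(float) for _ in range(16)]

def confidence(patch, mask):
    """Detector confidence for the obstacle with the patch applied."""
    return float(np.sum(w * (patch * mask)))

patch = np.zeros((8, 8))
lr = 0.05
for _ in range(200):
    # Average the gradient over all sampled frames so the patch stays
    # effective across the whole scenario, not just a single view.
    grad = np.mean([w * m for m in masks], axis=0)
    patch -= lr * grad                 # minimize detection confidence
    patch = np.clip(patch, -1.0, 1.0)  # keep pixel values printable

before = np.mean([confidence(np.zeros((8, 8)), m) for m in masks])
after = np.mean([confidence(patch, m) for m in masks])
```

Because the update direction is the frame-averaged gradient, the resulting patch lowers the mean confidence across every sampled view rather than overfitting to one frame.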
