Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos that show our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
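The robustness idea above — optimizing the patch against many frames from the attack scenario rather than a single image — can be sketched in a minimal toy form. This is an illustrative assumption, not the authors' implementation: the real attack targets a deep camera-based obstacle detector, while here a hypothetical sigmoid-of-linear `detection_score` stands in for the detector so the gradient can be written analytically. All names (`apply_patch`, `detection_score`, the patch location) are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the detector: obstacle confidence is a
# sigmoid of a linear function of the pixels. (The real system is a
# deep network; this toy keeps the gradient analytic.)
W = rng.normal(size=(8, 8)) * 0.1

def detection_score(frame):
    s = np.sum(W * frame)
    return 1.0 / (1.0 + np.exp(-s))

def apply_patch(frame, patch, y=2, x=2):
    # Paste the patch onto the frame at a fixed location
    # (e.g., the back of the truck in the attack scenario).
    out = frame.copy()
    out[y:y + patch.shape[0], x:x + patch.shape[1]] = patch
    return out

# A set of frames approximating the attack scenario; in practice these
# would come from different distances and viewing angles.
frames = [rng.normal(size=(8, 8)) for _ in range(5)]

# Gradient descent on the MEAN confidence over all frames, so the patch
# must suppress detection across the whole scenario, not one image.
patch = np.zeros((4, 4))
lr = 0.5
for _ in range(100):
    grads = []
    for f in frames:
        p = detection_score(apply_patch(f, patch))
        # d sigmoid(s)/d patch = p * (1 - p) * W over the patch region.
        grads.append(p * (1.0 - p) * W[2:6, 2:6])
    patch = np.clip(patch - lr * np.mean(grads, axis=0), -1.0, 1.0)

mean_before = np.mean([detection_score(f) for f in frames])
mean_after = np.mean([detection_score(apply_patch(f, patch)) for f in frames])
print(mean_before, mean_after)
```

Averaging the gradient over the frame set is the key robustness step: a patch tuned to one frame can exploit pixel patterns specific to that view, while the averaged objective forces it to work across the variations the scenario produces.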
