Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos that show our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
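The abstract describes making the patch robust by optimizing it over many input frames from the attack scenario. As a rough illustration of that idea (not the authors' actual method), the sketch below optimizes a single patch so a detector's obstacle confidence drops across a batch of frames; the names `detector`, `frames`, the patch placement region, and the loss are all assumptions introduced here for illustration.

```python
# Minimal sketch of multi-frame adversarial patch optimization (assumed
# setup: a differentiable obstacle detector returning confidence scores,
# a fixed placement region on the truck's rear, and frames sampled from
# the driving scenario; none of these names come from the paper itself).
import torch


def optimize_patch(frames, detector, patch_size=(3, 64, 64),
                   region=(100, 100), steps=200, lr=1e-2):
    """Optimize one patch so detection confidence drops across all
    sampled frames, approximating robustness to the changing viewpoints
    seen as the victim vehicle approaches."""
    patch = torch.rand(patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    y, x = region
    _, h, w = patch_size
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for frame in frames:              # frame: (3, H, W) tensor in [0, 1]
            attacked = frame.clone()
            attacked[:, y:y + h, x:x + w] = patch.clamp(0, 1)
            # Hypothetical detector API: returns per-candidate obstacle scores.
            scores = detector(attacked.unsqueeze(0))
            loss = loss + scores.max()    # suppress the strongest detection
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)
    return patch.detach()
```

Averaging the loss over frames from the whole approach trajectory, rather than a single view, is what gives the patch robustness to distance and angle changes during the attack.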
