Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector. We also improve attack robustness by optimizing the patch over a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
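The following sketch illustrates, under stated assumptions, how a patch could be optimized over many frames of the attack scenario so that it remains effective as the viewpoint changes. It is not the authors' implementation: the names `detector` (a differentiable obstacle detector returning objectness scores), `frames` (driving-scenario frames), and `paste_patch` (a function that renders the patch onto the truck's rear surface) are all hypothetical placeholders.

```python
import torch

def optimize_patch(detector, frames, paste_patch, steps=500, lr=0.01,
                   patch_size=(3, 128, 128)):
    """Optimize an adversarial patch so the detector's obstacle score
    drops across many frames of the attack scenario (averaged loss)."""
    patch = torch.rand(patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        loss = 0.0
        for frame in frames:
            # Render the current patch onto the back of the box truck
            # as it would appear from this frame's viewpoint (assumed helper).
            attacked = paste_patch(frame, patch.clamp(0, 1))
            # Minimize the detector's confidence that an obstacle is present.
            loss = loss + detector(attacked).max()
        loss = loss / len(frames)
        loss.backward()
        optimizer.step()

    return patch.detach().clamp(0, 1)
```

Averaging the loss over frames from the whole approach trajectory, rather than a single frame, is what gives the patch robustness to changes in distance and viewing angle as the vehicle nears the truck.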
