Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos that show our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
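
The multi-frame robustness idea can be illustrated with a minimal PyTorch sketch, assuming an expectation-over-transformation style loss averaged over frames of the attack scenario; the `detector`, `frames`, and `apply_patch` helper below are hypothetical stand-ins for illustration, not the demo's actual pipeline.

```python
import torch
import torch.nn.functional as F

def apply_patch(frame, patch, top_left):
    """Paste the patch onto a (C, H, W) frame at (row, col).

    A simple stand-in for the perspective-correct rendering a physical
    attack would need (hypothetical helper, not the authors' pipeline).
    """
    row, col = top_left
    _, h, w = patch.shape
    _, fh, fw = frame.shape
    pad = (col, fw - col - w, row, fh - row - h)   # left, right, top, bottom
    canvas = F.pad(patch, pad)                     # patch placed on a zero canvas
    mask = F.pad(torch.ones_like(patch), pad)      # 1 where the patch sits
    return frame * (1.0 - mask) + canvas

def optimize_patch(detector, frames, patch_size=(3, 64, 64), steps=500, lr=0.01):
    """Optimize a single patch against many frames of the attack scenario."""
    patch = torch.rand(patch_size, requires_grad=True)  # patch pixels are the parameters
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        loss = 0.0
        # Average the attack loss over frames taken at different distances,
        # angles, and lighting so the patch stays effective across the scenario.
        for frame, placement in frames:
            attacked = apply_patch(frame, patch, placement)
            scores = detector(attacked.unsqueeze(0))   # assumed: per-detection objectness scores
            loss = loss + scores.max()                 # suppress the strongest obstacle detection
        (loss / len(frames)).backward()
        optimizer.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)                     # keep the patch in a printable range
    return patch.detach()
```

Averaging the loss over many scenario frames, rather than optimizing against a single view, is what keeps the patch effective as the victim vehicle approaches the box truck from varying distances and angles.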
