Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by optimizing over a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
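To illustrate the multi-frame robustness idea described above, the following is a minimal sketch (not the authors' code) of optimizing a single patch over many frames sampled from the attack scenario, so that it suppresses obstacle detections across viewpoints and distances. The `detector`, `frames`, placement offsets, and hyperparameters are hypothetical placeholders.

```python
import torch

def paste_patch(frame, patch, top=100, left=100):
    """Overwrite a region of the frame with the patch (no warping, for brevity)."""
    attacked = frame.clone()
    _, h, w = patch.shape
    attacked[:, top:top + h, left:left + w] = patch
    return attacked

def optimize_patch(detector, frames, patch_size=(3, 200, 200), steps=500, lr=0.01):
    """Optimize a patch that lowers obstacle scores across many scenario frames."""
    patch = torch.rand(patch_size, requires_grad=True)   # random initialization
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.zeros(())
        for frame in frames:               # frames from the driving scenario
            attacked = paste_patch(frame, patch)
            scores = detector(attacked)    # per-proposal obstacle confidences
            loss = loss + scores.max()     # suppress the strongest detection
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)         # keep pixel values in a valid range
    return patch.detach()
```

Averaging the loss over diverse frames is what makes the printed patch effective across the range of positions the camera sees during the attack, rather than only in a single captured image.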
