Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector. We also improve attack robustness by optimizing over a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
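The robustness idea above, optimizing a single patch against many frames from the attack scenario rather than one image, resembles Expectation over Transformation (EOT)-style optimization. The sketch below is a hedged toy illustration, not the authors' implementation: it uses a hypothetical linear "detector" score per frame and plain gradient descent on the patch pixels, averaged over frames, to suppress detection across all of them.

```python
# Toy sketch (NOT the paper's code): optimize one patch over many frames.
# The linear per-frame detector and all sizes here are illustrative assumptions.
import random

random.seed(0)
N_PIX = 16      # hypothetical flattened patch size
N_FRAMES = 5    # frames sampled from the driving scenario

# Toy per-frame detector weights: score = sum(w_i * patch_i).
frames = [[random.uniform(-1, 1) for _ in range(N_PIX)]
          for _ in range(N_FRAMES)]

def detection_score(patch, w):
    """Higher score = obstacle (the patch) is more detectable."""
    return sum(p * wi for p, wi in zip(patch, w))

patch = [0.5] * N_PIX   # start from a gray patch
lr = 0.1
for _ in range(100):
    # Gradient of the mean score over frames = mean of per-frame weights.
    grad = [sum(f[i] for f in frames) / N_FRAMES for i in range(N_PIX)]
    # Descend to suppress detection; clamp to the valid pixel range [0, 1].
    patch = [min(1.0, max(0.0, p - lr * g)) for p, g in zip(patch, grad)]

mean_score = sum(detection_score(patch, f) for f in frames) / N_FRAMES
```

Averaging the gradient over frames is what makes the patch robust: a patch tuned to a single frame can overfit to that viewpoint, while the expectation over frames forces it to work across the distances and angles a following vehicle would actually see.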
