Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector. We further improve attack robustness by taking into account the variety of input frames that arise in the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
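
The sketch below is a minimal illustration of the multi-frame robustness idea described above, not the authors' implementation. It assumes a PyTorch setting, uses a hypothetical stand-in detector in place of a real camera-based obstacle detector, and simply minimizes the detector's average obstacle confidence over a batch of frames with the patch pasted at a fixed location (e.g., the rear of a box truck). All names and parameters here are illustrative assumptions.

# Minimal sketch (assumption-laden, not the authors' code): optimize one patch
# so that it suppresses obstacle confidence across many frames of the scenario.
import torch
import torch.nn as nn

class StandInDetector(nn.Module):
    """Hypothetical stand-in for a camera-based obstacle detector's confidence head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, x):                       # x: (B, 3, H, W), pixel values in [0, 1]
        return torch.sigmoid(self.backbone(x))  # per-frame obstacle confidence

def apply_patch(frames, patch, top, left):
    """Paste the patch onto every frame at a fixed location (e.g., truck rear)."""
    patched = frames.clone()
    ph, pw = patch.shape[1:]
    patched[:, :, top:top + ph, left:left + pw] = patch  # broadcasts over the batch
    return patched

def optimize_patch(detector, frames, patch_size=(3, 64, 64), steps=200, lr=0.01):
    """Minimize detection confidence averaged over a variety of frames from the
    attack scenario, so the patch stays effective as viewpoint and distance vary."""
    patch = torch.rand(patch_size, requires_grad=True)   # patch pixels in [0, 1]
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = apply_patch(frames, patch.clamp(0, 1), top=80, left=96)
        loss = detector(patched).mean()                   # average over all frames
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)

if __name__ == "__main__":
    torch.manual_seed(0)
    frames = torch.rand(8, 3, 224, 224)   # placeholder frames of the attack scene
    detector = StandInDetector().eval()
    adv_patch = optimize_patch(detector, frames)
    print("optimized patch shape:", adv_patch.shape)

Averaging the loss over many frames plays the same role as expectation-over-transformation-style training: a patch tuned to a single view tends to fail once the camera's distance or angle changes, whereas a patch optimized over the whole frame set remains effective throughout the approach.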
