Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector, and we improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.

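The robustness idea mentioned in the abstract, making one patch effective across the many frames a camera captures during the attack scenario rather than a single image, can be illustrated with a minimal optimization sketch. This is not the authors' implementation: `frames`, `apply_patch`, and `detection_loss` are assumed placeholders for scenario frames, a differentiable step that renders the patch onto the truck's rear, and a loss that suppresses the detector's obstacle output.

```python
import torch

def optimize_patch(frames, apply_patch, detection_loss, steps=500, lr=0.01):
    """Sketch: optimize a single adversarial patch jointly over many frames
    sampled from the approach, so it stays effective across distances/angles.
    apply_patch and detection_loss are assumed helpers, not from the paper."""
    patch = torch.rand(3, 64, 64, requires_grad=True)  # patch texture in [0, 1]
    opt = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        total = 0.0
        for frame in frames:                      # frames from the attack scenario
            patched = apply_patch(frame, patch)   # render patch into the frame
            total = total + detection_loss(patched)  # e.g., obstacle confidence
        opt.zero_grad()
        total.backward()
        opt.step()
        patch.data.clamp_(0.0, 1.0)               # keep a printable texture
    return patch.detach()
```

Averaging the loss over the whole set of frames, instead of attacking one frame at a time, is what makes the resulting patch robust to the viewpoint changes that occur as the victim vehicle approaches.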