Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos that show our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
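The robustness idea described above — optimizing one patch against many input frames from the attack scenario rather than a single image — can be sketched with a toy, fully hypothetical model. The `score` function below is a stand-in for a detector's obstacle-confidence output (a real attack would differentiate through the full camera-based detection network); the loop averages gradients over several frames so the patch suppresses detection across all of them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a detector's obstacle-confidence score:
# a fixed linear scoring of the image squashed through tanh. A real
# attack would backpropagate through the detection network instead.
w = rng.normal(size=(8, 8)) * 0.1

def score(frame, patch, mask):
    """Paste the patch into the frame and return detection confidence."""
    x = frame * (1 - mask) + patch * mask
    return np.tanh((w * x).sum())

# Several frames sampled from the attack scenario (toy random images here).
frames = [rng.normal(size=(8, 8)) for _ in range(5)]

# Patch region, e.g. the area covering the back of the box truck.
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0
patch = np.zeros((8, 8))

# Gradient descent on the *average* confidence over all frames, so the
# patch works across the variety of inputs, not just one image.
lr = 0.5
for _ in range(100):
    grads = []
    for f in frames:
        s = score(f, patch, mask)
        grads.append((1 - s**2) * w * mask)  # d tanh(u)/d patch
    patch = np.clip(patch - lr * np.mean(grads, axis=0), -1.0, 1.0)

before = np.mean([score(f, np.zeros_like(patch), mask) for f in frames])
after = np.mean([score(f, patch, mask) for f in frames])
```

After optimization, the average confidence over the frame set should drop relative to the unpatched baseline, which is the effect the attack relies on: the detector's obstacle score stays suppressed throughout the approach, not only for one viewpoint.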
