Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos that show our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
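To make the robustness idea concrete, here is a minimal sketch, not the authors' implementation, of optimizing an adversarial patch over many camera frames sampled from the attack scenario so that it keeps fooling the detector as the victim's viewpoint changes. `detector` is a hypothetical stand-in for a camera-based obstacle detector that returns per-image confidence scores, and the fixed patch placement in `apply_patch` is a placeholder for projecting the patch onto the back of the box truck in each frame.

```python
import torch

def apply_patch(frame, patch, top=100, left=100):
    # Paste the patch into the frame at a fixed region (placeholder for
    # rendering it on the truck's rear under each frame's geometry).
    patched = frame.clone()
    _, h, w = patch.shape
    patched[:, top:top + h, left:left + w] = patch
    return patched

def optimize_patch(detector, frames, patch_size=(3, 64, 64), steps=500, lr=0.01):
    # Random initial patch, kept in the valid [0, 1] pixel range.
    patch = torch.rand(patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        # Average the attack loss over all frames so the patch is not
        # overfitted to a single distance or viewing angle.
        loss = torch.zeros(())
        for frame in frames:
            patched = apply_patch(frame, patch)
            scores = detector(patched.unsqueeze(0))   # obstacle confidences
            loss = loss + scores.max()                # suppress detections
        loss = loss / len(frames)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)                    # stay a valid image

    return patch.detach()
```

Averaging the loss over frames from the scenario is the robustness ingredient the abstract alludes to: a patch tuned to a single frame typically stops working once the relative pose between the camera and the truck changes.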
