Ziwen Wan (University of California, Irvine), Junjie Shen (University of California, Irvine), Jalen Chuang (University of California, Irvine), Xin Xia (University of California, Los Angeles), Joshua Garcia (University of California, Irvine), Jiaqi Ma (University of California, Los Angeles), Qi Alfred Chen (University of California, Irvine)

In high-level Autonomous Driving (AD) systems, behavioral planning is in charge of making high-level driving decisions such as cruising and stopping, and is thus highly security-critical. In this work, we perform the first systematic study of semantic security vulnerabilities specific to overly conservative AD behavioral planning behaviors, i.e., those that can cause failed or significantly degraded mission performance, which can be critical for AD services such as robo-taxi/delivery. We call them semantic Denial-of-Service (DoS) vulnerabilities, which we envision to be most generally exposed in practical AD systems due to the tendency toward conservativeness to avoid safety incidents. To achieve high practicality and realism, we assume that the attacker can only introduce seemingly benign external physical objects into the driving environment, e.g., off-road dumped cardboard boxes.

To systematically discover such vulnerabilities, we design PlanFuzz, a novel dynamic testing approach that addresses various problem-specific design challenges. Specifically, we propose and identify planning invariants as novel testing oracles, and design a new input generation method that systematically enforces problem-specific constraints on attacker-introduced physical objects. We also design a novel behavioral planning vulnerability distance metric to effectively guide the discovery. We evaluate PlanFuzz on 3 planning implementations from practical open-source AD systems and find that it can effectively discover 9 previously unknown semantic DoS vulnerabilities without false positives. We find all our new designs necessary: removing any one of them generally causes statistically significant performance drops. We further perform exploitation case studies using simulation and real-vehicle traces, and discuss root causes and potential fixes.
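To picture how these three pieces could fit together, the following is a minimal, purely illustrative Python sketch of a distance-guided testing loop under simplified assumptions. Every name in it (the road model, the mutation operator, the planner interface `run_planner`, and the distance value it returns) is a hypothetical placeholder for exposition, not PlanFuzz's actual design or API; it only shows how a planning-invariant oracle, physically constrained input generation, and a vulnerability-distance metric can jointly guide a search.

```python
import random

# Illustrative sketch of a distance-guided fuzzing loop in the spirit of
# the approach described above. All names below are hypothetical
# placeholders, not the paper's implementation.

def on_drivable_area(position):
    """Placeholder road model: assume the drivable lane is |y| <= 2 m."""
    _, y = position
    return abs(y) <= 2.0

def satisfies_physical_constraints(objects):
    """Attacker-introduced objects must look benign, e.g., stay off-road."""
    return all(not on_drivable_area(obj["position"]) for obj in objects)

def mutate_objects(objects):
    """Randomly perturb one object's placement (placeholder mutation)."""
    mutated = [dict(obj) for obj in objects]
    victim = random.choice(mutated)
    x, y = victim["position"]
    victim["position"] = (x + random.uniform(-0.5, 0.5),
                          y + random.uniform(-0.5, 0.5))
    return mutated

def check_planning_invariant(decision, way_is_clear):
    """Testing oracle, e.g., 'the AV should proceed when the way is clear'.
    Returns False when the invariant is violated (a semantic DoS)."""
    return not (way_is_clear and decision == "stop")

def fuzz(run_planner, seed_objects, budget=1000):
    """Search for object placements that trigger over-conservative decisions.

    run_planner(objects) is assumed to return (decision, way_is_clear,
    distance), where distance plays the role of a vulnerability-distance
    metric: how close the planner's internal decision variables are to
    flipping into an over-conservative decision (smaller = closer).
    """
    best, best_dist = seed_objects, float("inf")
    for _ in range(budget):
        candidate = mutate_objects(best)
        if not satisfies_physical_constraints(candidate):
            continue                                  # keep inputs realistic
        decision, way_is_clear, dist = run_planner(candidate)
        if not check_planning_invariant(decision, way_is_clear):
            return candidate                          # semantic DoS found
        if dist < best_dist:                          # distance-guided search
            best, best_dist = candidate, dist
    return None

# Toy usage: a fake planner that stops whenever any box is within 3 m
# laterally of the lane center (an over-conservative obstacle filter).
def toy_planner(objects):
    lateral = min(abs(o["position"][1]) for o in objects)
    decision = "stop" if lateral < 3.0 else "cruise"
    return decision, True, max(0.0, lateral - 3.0)

trigger = fuzz(toy_planner, [{"position": (10.0, 5.0)}], budget=500)
```

In this toy setting, the search gradually nudges an off-road box toward the lane edge until the planner's over-conservative filter makes it stop with the way clear, mirroring (at a cartoon level) how a distance metric can steer mutation toward invariant violations.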
