Ziwen Wan (University of California, Irvine), Junjie Shen (University of California, Irvine), Jalen Chuang (University of California, Irvine), Xin Xia (The University of California, Los Angeles), Joshua Garcia (University of California, Irvine), Jiaqi Ma (The University of California, Los Angeles), Qi Alfred Chen (University of California, Irvine)

In high-level Autonomous Driving (AD) systems, behavioral planning is responsible for making high-level driving decisions such as cruising and stopping, and is thus highly security-critical. In this work, we perform the first systematic study of semantic security vulnerabilities specific to overly-conservative AD behavioral planning behaviors, i.e., those that can cause failed or significantly degraded mission performance, which can be critical for AD services such as robo-taxi/delivery. We call these semantic Denial-of-Service (DoS) vulnerabilities, which we envision to be the most generally exposed in practical AD systems due to such systems' tendency toward conservativeness in order to avoid safety incidents. To achieve high practicality and realism, we assume that the attacker can only introduce seemingly benign external physical objects into the driving environment, e.g., off-road dumped cardboard boxes.

To systematically discover such vulnerabilities, we design PlanFuzz, a novel dynamic testing approach that addresses various problem-specific design challenges. Specifically, we propose and identify planning invariants as novel testing oracles, and design new input generation methods that systematically enforce problem-specific constraints on attacker-introduced physical objects. We also design a novel behavioral planning vulnerability distance metric to effectively guide the discovery. We evaluate PlanFuzz on 3 planning implementations from practical open-source AD systems and find that it can effectively discover 9 previously unknown semantic DoS vulnerabilities without false positives. We find all our new designs necessary: without each design, statistically significant performance drops are generally observed. We further perform exploitation case studies using simulation and real-vehicle traces, and discuss root causes and potential fixes.
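The abstract names PlanFuzz's three components (planning-invariant oracles, constrained input generation, and a vulnerability distance metric) only at a high level. As a rough illustration of how such components could fit together, the minimal sketch below shows a distance-guided greedy mutation loop over attacker-placed physical objects. Everything here is a hypothetical assumption, not the paper's actual implementation: the invariant check, the distance definition, and the planner/scenario interfaces (violates_planning_invariant, vulnerability_distance, attacker_objects, clamp_to_offroad) are all illustrative stand-ins.

```python
# Hypothetical sketch of a PlanFuzz-style fuzzing loop; all names and
# interfaces below are assumptions for illustration, not the paper's code.
import copy
import random

def violates_planning_invariant(plan) -> bool:
    """Oracle (assumed): flag an overly-conservative decision, e.g. the
    ego vehicle decides to stop although nothing blocks its path."""
    return plan.decision == "STOP" and not plan.path_is_blocked

def vulnerability_distance(plan) -> float:
    """Guidance metric (assumed): how far the planner is from flipping
    into an overly-conservative decision; 0 means the invariant is violated."""
    return max(0.0, plan.stop_decision_threshold - plan.current_obstacle_cost)

def mutate(scenario):
    """Perturb one attacker-placed object under physical constraints,
    keeping it seemingly benign (e.g., off-road)."""
    new = copy.deepcopy(scenario)
    box = random.choice(new.attacker_objects)
    box.x += random.uniform(-0.5, 0.5)
    box.y += random.uniform(-0.5, 0.5)
    new.clamp_to_offroad(box)  # enforce problem-specific placement constraints
    return new

def planfuzz(seed_scenario, planner, budget=10_000):
    """Greedy search: keep mutations that reduce the vulnerability distance,
    and report a scenario once the planning invariant is violated."""
    best = seed_scenario
    best_dist = vulnerability_distance(planner.run(best))
    for _ in range(budget):
        candidate = mutate(best)
        plan = planner.run(candidate)
        if violates_planning_invariant(plan):
            return candidate  # semantic DoS vulnerability found
        dist = vulnerability_distance(plan)
        if dist < best_dist:  # candidate moved closer to a violation
            best, best_dist = candidate, dist
    return None
```

The key design point the abstract emphasizes is the guidance signal: rather than fuzzing blindly, the distance metric lets the search retain object placements that push the planner toward, but not yet into, an overly-conservative decision.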
