Jinghan Yang, Andrew Estornell, Yevgeniy Vorobeychik (Washington University in St. Louis)

A common vision for large-scale autonomous vehicle deployment is in a ride-hailing context. While this promises tremendous societal benefits, large-scale deployment can also exacerbate the impact of potential vulnerabilities of autonomous vehicle technologies. One particularly concerning vulnerability demonstrated in recent security research involves GPS spoofing, whereby a malicious party can introduce significant error into the perceived location of the vehicle. However, such attacks focus on a single target vehicle. Our goal is to understand the systemic impact of a limited number of carefully placed spoofing devices on the quality of a ride-hailing service that employs a large number of autonomous vehicles. We consider two variants of this problem: 1) a static variant, in which the spoofing device locations and their configuration are fixed, and 2) a dynamic variant, where both the spoofing devices and their configuration can change over time. In addition, we consider two possible attack objectives: 1) to maximize overall travel delay, and 2) to minimize the number of successfully completed requests (dropping off passengers at the wrong destinations). First, we show that the problem is NP-hard even in the static case. Next, we present an integer linear programming approach for solving the static variant of the problem, as well as a novel deep reinforcement learning approach for the dynamic variant. Our experiments on a real traffic network demonstrate that the proposed attacks on autonomous fleets are highly successful, and even a few spoofing devices can significantly degrade the efficacy of an autonomous ride-hailing fleet.

