Jinghan Yang, Andrew Estornell, Yevgeniy Vorobeychik (Washington University in St. Louis)

A common vision for large-scale autonomous vehicle deployment is in a ride-hailing context. While this promises tremendous societal benefits, large-scale deployment can also exacerbate the impact of potential vulnerabilities of autonomous vehicle technologies. One particularly concerning vulnerability demonstrated in recent security research involves GPS spoofing, whereby a malicious party can introduce significant error into the perceived location of the vehicle. However, such attacks focus on a single target vehicle. Our goal is to understand the systemic impact of a limited number of carefully placed spoofing devices on the quality of a ride-hailing service that employs a large number of autonomous vehicles. We consider two variants of this problem: 1) a static variant, in which the spoofing device locations and their configuration are fixed, and 2) a dynamic variant, where both the spoofing devices and their configuration can change over time. In addition, we consider two possible attack objectives: 1) to maximize overall travel delay, and 2) to minimize the number of successfully completed requests (i.e., causing passengers to be dropped off at the wrong destinations). First, we show that the problem is NP-hard even in the static case. Next, we present an integer linear programming approach for solving the static variant of the problem, as well as a novel deep reinforcement learning approach for the dynamic variant. Our experiments on a real traffic network demonstrate that the proposed attacks on autonomous fleets are highly successful, and even a few spoofing devices can significantly degrade the efficacy of an autonomous ride-hailing fleet.
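To make the static variant concrete, the sketch below shows a toy integer linear program for placing a small budget of spoofing devices so as to maximize induced travel delay. This is an illustrative simplification, not the paper's actual formulation: the candidate locations, per-location delay values, and the assumption that delays add independently are all hypothetical placeholders, and a generic open-source solver (PuLP with CBC) stands in for whatever solver the authors used.

```python
# Illustrative sketch (not the paper's formulation): a toy ILP that places a
# limited budget of spoofing devices on road-network nodes to maximize the
# total extra travel delay inflicted on a fleet. Inputs are hypothetical.
import pulp

# Hypothetical candidate device locations and the extra delay (minutes) each
# would add to fleet routes if a spoofing device were placed there.
candidate_locations = ["n1", "n2", "n3", "n4", "n5"]
extra_delay = {"n1": 12.0, "n2": 7.5, "n3": 20.0, "n4": 3.0, "n5": 9.0}
budget = 2  # maximum number of spoofing devices the attacker can deploy

prob = pulp.LpProblem("static_spoofing_placement", pulp.LpMaximize)

# Binary decision variable: place a device at location v or not.
place = pulp.LpVariable.dicts("place", candidate_locations, cat="Binary")

# Objective: maximize total induced delay (assumes delays are additive,
# a simplification of the real routing interaction modeled in the paper).
prob += pulp.lpSum(extra_delay[v] * place[v] for v in candidate_locations)

# Budget constraint: deploy at most `budget` devices.
prob += pulp.lpSum(place[v] for v in candidate_locations) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [v for v in candidate_locations if place[v].value() == 1]
print("Chosen device locations:", chosen)
```

Under these toy assumptions the solver simply picks the two highest-delay locations; the NP-hardness shown in the paper arises once device placements interact through vehicle routing and request assignment rather than contributing independent delays.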
