Jinghan Yang, Andrew Estornell, Yevgeniy Vorobeychik (Washington University in St. Louis)

A common vision for large-scale autonomous vehicle deployment is in a ride-hailing context. While this promises tremendous societal benefits, large-scale deployment can also exacerbate the impact of potential vulnerabilities of autonomous vehicle technologies. One particularly concerning vulnerability demonstrated in recent security research involves GPS spoofing, whereby a malicious party can introduce significant error into a vehicle's perceived location. However, such attacks focus on a single target vehicle. Our goal is to understand the systemic impact of a limited number of carefully placed spoofing devices on the quality of a ride-hailing service that employs a large number of autonomous vehicles. We consider two variants of this problem: 1) a static variant, in which the spoofing device locations and their configuration are fixed, and 2) a dynamic variant, where both the spoofing device locations and their configuration can change over time. In addition, we consider two possible attack objectives: 1) to maximize overall travel delay, and 2) to minimize the number of successfully completed requests (by causing passengers to be dropped off at the wrong destinations). First, we show that the problem is NP-hard even in the static case. Next, we present an integer linear programming approach for solving the static variant of the problem, as well as a novel deep reinforcement learning approach for the dynamic variant. Our experiments on a real traffic network demonstrate that the proposed attacks on autonomous fleets are highly successful, and even a few spoofing devices can significantly degrade the efficacy of an autonomous ride-hailing fleet.
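To make the static variant concrete, the sketch below shows one way such a device-placement problem can be cast as an integer linear program, here as a budgeted-coverage model solved with PuLP in Python. This is only an illustrative sketch, not the paper's formulation: the candidate sites, coverage sets, per-request delays, and budget are all hypothetical toy data, and the assumption that a request is delayed whenever any placed device covers its route is a simplification introduced for this example.

```python
# Hypothetical sketch of the static spoofing-device placement problem as a
# budgeted-coverage ILP: choose at most k candidate sites so that the total
# delay inflicted on covered ride requests is maximized. All data are toy values.
import pulp

candidate_sites = ["s1", "s2", "s3", "s4"]   # possible spoofing-device locations (hypothetical)
requests = ["r1", "r2", "r3", "r4", "r5"]    # ride-hailing requests (hypothetical)
k = 2                                        # attacker's device budget

# covers[j] = requests whose routes pass within range of a device at site j (toy data)
covers = {
    "s1": {"r1", "r2"},
    "s2": {"r2", "r3", "r4"},
    "s3": {"r4"},
    "s4": {"r1", "r5"},
}
# delay[r] = travel delay (minutes) inflicted on request r if it is affected (toy data)
delay = {"r1": 4.0, "r2": 7.0, "r3": 3.0, "r4": 5.0, "r5": 2.0}

prob = pulp.LpProblem("static_spoofer_placement", pulp.LpMaximize)

x = pulp.LpVariable.dicts("place", candidate_sites, cat=pulp.LpBinary)  # place device at site j
y = pulp.LpVariable.dicts("affected", requests, cat=pulp.LpBinary)      # request r is delayed

# Objective: total travel delay induced across affected requests.
prob += pulp.lpSum(delay[r] * y[r] for r in requests)

# A request can only be affected if at least one placed device covers its route.
for r in requests:
    prob += y[r] <= pulp.lpSum(x[j] for j in candidate_sites if r in covers[j])

# Respect the attacker's device budget.
prob += pulp.lpSum(x[j] for j in candidate_sites) <= k

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [j for j in candidate_sites if x[j].value() > 0.5]
print("chosen sites:", chosen, "total induced delay:", pulp.value(prob.objective))
```

Budgeted coverage of this form is NP-hard in general, which is consistent with the hardness result stated in the abstract, though the paper's actual model of spoofing effects and routing may differ from this simplified example.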
