Hengyi Liang, Ruochen Jiao (Northwestern University), Takami Sato, Junjie Shen, Qi Alfred Chen (UC Irvine), and Qi Zhu (Northwestern University)

Best Short Paper Award Winner!

Machine learning techniques, particularly those based on deep neural networks (DNNs), are widely adopted in the development of advanced driver-assistance systems (ADAS) and autonomous vehicles. While providing significant improvement over traditional methods in average performance, the usage of DNNs also presents great challenges to system safety, especially given the uncertainty of the surrounding environment, the disturbance to system operations, and the current lack of methodologies for predicting DNN behavior. In particular, adversarial attacks on the sensing input may cause errors in the system’s perception of the environment and lead to system failure. However, existing works mainly focus on analyzing the impact of such attacks on the sensing and perception results and designing mitigation strategies accordingly. We argue that as system safety is ultimately determined by the actions the system takes, it is essential to take an end-to-end approach and address adversarial attacks with consideration of the entire ADAS or autonomous driving pipeline, from sensing and perception to planning, navigation and control. In this paper, we present our initial findings in quantitatively analyzing the impact of a type of adversarial attack (one that leverages a road patch) on system planning and control, and discuss possible directions to systematically address such attacks with an end-to-end view.
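To illustrate the end-to-end idea of the abstract, the sketch below shows how a perception-level error (such as a lane-center bias induced by a road patch) can be propagated through a simple lateral controller and vehicle model to quantify its effect on the driven trajectory. This is a minimal, hypothetical example: the controller, kinematic model, gains, and bias profile are all illustrative assumptions and are not the analysis pipeline used in the paper.

```python
# Hypothetical sketch: propagate a perception-level bias through a simple
# lateral controller to measure its end-to-end effect on vehicle motion.
# All parameters (speed, gains, wheelbase, bias profile) are assumptions
# for illustration, not values from the paper.

import numpy as np

def simulate_lateral_deviation(perception_bias, steps=200, dt=0.05,
                               speed=10.0, k_p=0.5, k_d=1.0, wheelbase=2.5):
    """Kinematic single-track model driven by a proportional controller
    that tracks the (possibly attacked) lane-center estimate."""
    y = 0.0        # lateral position relative to the true lane center (m)
    heading = 0.0  # heading error (rad)
    trajectory = []
    for t in range(steps):
        # Perception reports the lane-center offset; the attack adds a bias.
        perceived_offset = -y + perception_bias[t]
        # Proportional steering on the perceived offset, damped by heading.
        steer = np.clip(k_p * perceived_offset - k_d * heading, -0.3, 0.3)
        # Kinematic bicycle update with forward-Euler integration.
        heading += speed * np.tan(steer) / wheelbase * dt
        y += speed * np.sin(heading) * dt
        trajectory.append(y)
    return np.array(trajectory)

# Benign run: no perception bias.
benign = simulate_lateral_deviation(np.zeros(200))
# Attacked run: an assumed road-patch effect that ramps the bias up to 0.8 m.
attacked = simulate_lateral_deviation(np.linspace(0.0, 0.8, 200))

print(f"max lateral deviation (benign):   {np.abs(benign).max():.2f} m")
print(f"max lateral deviation (attacked): {np.abs(attacked).max():.2f} m")
```

Even in this toy setting, the benign run stays on the lane center while the attacked run drifts by roughly the injected bias, which is the kind of planning- and control-level consequence an end-to-end analysis aims to quantify.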

