Ryunosuke Kobayashi, Kazuki Nomoto, Yuna Tanaka, Go Tsuruoka (Waseda University), Tatsuya Mori (Waseda University/NICT/RIKEN)

Object detection is a crucial function that identifies the position and type of objects from data acquired by sensors. Autonomous driving systems perform object detection on camera and LiDAR data and, based on the results, control the vehicle to follow the safest route. However, machine-learning-based object detection has been reported to be vulnerable to adversarial examples. In this study, we propose a new attack on LiDAR object detection models, which we call “Shadow Hack.” Whereas previous attacks mainly injected perturbed point clouds into the LiDAR data, our method generates “adversarial shadows” in the LiDAR point cloud: the attacker strategically places materials such as aluminum leisure mats to reproduce shadows with optimized positions and shapes. This technique can mislead LiDAR-based object detection in autonomous vehicles, potentially causing congestion and accidents through induced braking and avoidance maneuvers. We reproduce the Shadow Hack attack in simulation and evaluate its success rate. Furthermore, by identifying the conditions under which the attack succeeds, we aim to propose countermeasures and contribute to improving the robustness of autonomous driving systems.
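To illustrate the core idea, the following sketch shows one plausible way to simulate an adversarial shadow in a point cloud: any return whose ray passes through the angular extent of a placed occluder (e.g., a reflective mat) and lies beyond it is dropped, leaving a hole in the data. The function name, the angular-window parameterization, and all numeric values are illustrative assumptions, not the authors' actual simulation code.

```python
import numpy as np

def apply_shadow(points, az_range, el_range, occluder_dist):
    """Simulate the shadow cast by a flat occluder in a LiDAR scan.

    points        -- (N, 3) array of x, y, z in the sensor frame
    az_range      -- (min, max) azimuth of the occluder, radians
    el_range      -- (min, max) elevation of the occluder, radians
    occluder_dist -- distance from the sensor to the occluder

    A point is shadowed (removed) if its ray falls inside the
    occluder's angular window AND the point lies farther away
    than the occluder itself.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)  # horizontal angle of each return
    el = np.arcsin(np.clip(z / np.maximum(rng, 1e-9), -1.0, 1.0))
    in_window = (
        (az >= az_range[0]) & (az <= az_range[1])
        & (el >= el_range[0]) & (el <= el_range[1])
    )
    shadowed = in_window & (rng > occluder_dist)
    return points[~shadowed]
```

An attacker-side optimization would then search over the occluder's placement (and hence `az_range`, `el_range`, `occluder_dist`) for the shadow that most degrades the downstream detector's output.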

