Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

The physical-world adversarial patch attack poses a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that are applicable to a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and efforts from both academia and industry to robustify perception models in autonomous vehicles.
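To give a concrete flavor of the certifiable robustness techniques the survey covers, below is a minimal, hypothetical sketch of one common approach from this literature: majority voting over ablated inputs, in the spirit of de-randomized smoothing (Levine & Feizi, 2020). The base classifier `f`, the band width, and the patch width are illustrative assumptions, not the method of any specific paper surveyed here.

```python
# Illustrative sketch (not any specific surveyed defense): certification by
# majority voting over column ablations. Assumes a base classifier `f` that
# maps an ablated image to a class label; all parameters are placeholders.
import numpy as np

def ablate_columns(image, start, band_width):
    """Keep only a vertical band of columns; zero out everything else."""
    out = np.zeros_like(image)
    end = min(start + band_width, image.shape[1])
    out[:, start:end] = image[:, start:end]
    return out

def certified_classify(image, f, num_classes, band_width=20, patch_width=30):
    """Majority-vote classification with a certificate against any single
    contiguous adversarial patch narrower than `patch_width` columns.

    Returns (predicted_label, is_certified).
    """
    h, w = image.shape[:2]
    votes = np.zeros(num_classes, dtype=int)
    for start in range(w):  # slide the retained band across all columns
        votes[f(ablate_columns(image, start, band_width))] += 1

    order = np.argsort(votes)[::-1]
    top, runner_up = order[0], order[1]
    # A patch of width p intersects at most (p + band_width - 1) bands,
    # so at most that many votes are under adversarial control.
    max_flipped = patch_width + band_width - 1
    gap = int(votes[top] - votes[runner_up])
    # Each flipped vote can shrink the gap by 2 (one off the top label,
    # one onto the runner-up), hence the factor of 2.
    is_certified = gap > 2 * max_flipped
    return top, is_certified
```

The key point, shared across many defenses in this space, is that the certificate is purely combinatorial: if the vote gap exceeds twice the number of ablations any patch could influence, no patch placement can change the prediction, regardless of the patch's content.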
