Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

Physical-world adversarial patch attacks pose a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that apply across a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call on both academia and industry to devote attention and effort to robustifying perception models in autonomous vehicles.
