Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

The physical-world adversarial patch attack poses a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that are applicable to a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and efforts from both academia and industry to robustify perception models in autonomous vehicles.
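To make the idea of certifiable robustness concrete, the sketch below illustrates the masking-and-agreement intuition behind several defenses this survey covers (e.g., PatchCleanser-style approaches): classify many masked copies of the input so that some mask is guaranteed to cover any patch of bounded size, and check whether all predictions agree. This is a simplified single-masking illustration under assumed mask geometry; the `model` callable and mask parameters are hypothetical, and real certified defenses add further machinery (such as a second masking round) to obtain a formal certificate.

```python
# Simplified sketch of the mask-and-agree intuition behind certified
# patch defenses. `model`, `mask_size`, and `stride` are illustrative
# assumptions, not a specific defense's actual interface.
import numpy as np

def certify_by_masking(image, model, mask_size, stride):
    """Classify every masked copy of `image`. If the stride is chosen so
    that some mask fully covers any patch of the bounded size, unanimous
    agreement across masked predictions is the core condition that
    certified defenses build on (real defenses add a second masking
    round to turn this into a formal robustness certificate)."""
    H, W = image.shape[:2]
    preds = set()
    for y in range(0, H - mask_size + 1, stride):
        for x in range(0, W - mask_size + 1, stride):
            masked = image.copy()
            # Occlude one candidate patch location with zeros.
            masked[y:y + mask_size, x:x + mask_size] = 0
            preds.add(model(masked))
    label = model(image)
    # Agreement of every masked prediction with the clean prediction.
    agrees = preds == {label}
    return label, agrees
```

As a usage example, a toy constant classifier trivially satisfies the agreement condition; a real evaluation would sweep `mask_size` against the assumed upper bound on patch size.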
