Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

The physical-world adversarial patch attack poses a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that are applicable to a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and efforts from both academia and industry to robustify perception models in autonomous vehicles.
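To make the notion of certifiable robustness concrete, below is a minimal sketch of de-randomized smoothing (Levine and Feizi, 2020), one representative of the voting-based certified patch defenses this survey covers for the classification task. The function name, the `classify` callback, and the parameter values are illustrative assumptions, not an API from the paper; the certification check is the standard vote-margin bound for column ablations.

```python
import numpy as np

def certified_patch_classification(image, classify, band_width, patch_size, num_classes):
    """Simplified de-randomized smoothing against patch attacks (Levine & Feizi, 2020).

    Runs a base classifier `classify` on every vertical band of `band_width`
    columns (all other pixels zeroed out), majority-votes the predictions, and
    checks whether the vote margin is too large for any square patch of side
    `patch_size` to overturn.
    """
    h, w, _ = image.shape
    votes = np.zeros(num_classes, dtype=int)
    for start in range(w):                                   # one ablation per start column
        cols = [(start + i) % w for i in range(band_width)]  # wrap-around band
        ablated = np.zeros_like(image)
        ablated[:, cols, :] = image[:, cols, :]
        votes[classify(ablated)] += 1

    ranked = np.argsort(votes)[::-1]                         # ties ignored for simplicity
    top, runner_up = ranked[0], ranked[1]
    # A patch of width `patch_size` overlaps at most patch_size + band_width - 1
    # of the w bands; each corrupted band can move at most one vote from `top`
    # to `runner_up`, shrinking the margin by 2. Hence the factor of 2 below.
    max_affected = patch_size + band_width - 1
    certified = votes[top] - votes[runner_up] > 2 * max_affected
    return int(top), bool(certified)

# Toy usage with a stand-in "model"; a real deployment would use a network
# trained on band-ablated images.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3)).astype(np.float32)
toy_classify = lambda x: int(x.sum() * 10) % 10              # placeholder classifier
label, is_certified = certified_patch_classification(
    img, toy_classify, band_width=4, patch_size=5, num_classes=10)
print(label, is_certified)
```

The key design choice is that column ablation deterministically bounds how many votes a contiguous patch can influence, which is what turns the majority vote into a robustness certificate rather than a purely empirical defense.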
