Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

The physical-world adversarial patch attack poses a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that apply across a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for joint attention and effort from both academia and industry to robustify perception models in autonomous vehicles.
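To make the idea of certifiable robustness concrete, below is a minimal, illustrative sketch of one canonical technique in this family: column-ablation de-randomized smoothing (Levine & Feizi, 2020), which appears among the core techniques such surveys cover. The image is classified band by band and the bands vote; because a bounded patch can only corrupt a bounded number of bands, a large enough vote margin certifies that no patch placement can flip the prediction. Everything here is a toy assumption, not the paper's method: `base_classify` stands in for a model trained on column-masked inputs, and the band/patch widths are arbitrary.

```python
import math
import numpy as np

def base_classify(masked: np.ndarray) -> int:
    """Hypothetical binary base classifier: thresholds the mean visible pixel.

    In practice this would be a network trained on column-masked inputs.
    """
    visible = masked[~np.isnan(masked)]
    return int(visible.mean() > 0.5)

def certified_patch_classify(image: np.ndarray,
                             band_width: int = 20,
                             patch_width: int = 30):
    """Majority-vote classification with a patch-robustness certificate.

    The image is split into vertical bands; each band is classified in
    isolation and the bands vote. A contiguous patch of width `patch_width`
    overlaps at most ceil((patch_width + band_width - 1) / band_width)
    bands, so if the winning margin exceeds twice that number, no patch
    placement can change the majority vote.
    """
    _, w = image.shape
    votes = np.zeros(2, dtype=int)  # two classes in this toy example
    for start in range(0, w, band_width):
        # Ablate everything except the current band (NaN marks masked pixels).
        masked = np.full_like(image, np.nan, dtype=np.float64)
        masked[:, start:start + band_width] = image[:, start:start + band_width]
        votes[base_classify(masked)] += 1
    top = int(votes.argmax())
    margin = int(votes[top] - np.delete(votes, top).max())
    affected = math.ceil((patch_width + band_width - 1) / band_width)
    return top, margin > 2 * affected  # (predicted label, certified?)

# Usage: a certificate means no patch of the given size, placed anywhere
# in the image, can change this prediction.
image = np.random.rand(224, 224)
label, certified = certified_patch_classify(image)
print(f"label={label}, certified={certified}")
```

The same vote-and-margin reasoning underlies certified defenses for detection and segmentation as well, with the certificate adapted to the task's output structure.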
