Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

The physical-world adversarial patch attack poses a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that are applicable to a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and efforts from both academia and industry to robustify perception models in autonomous vehicles.
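To make the idea of certifiable robustness concrete, the sketch below illustrates one common certification strategy from this literature, voting over masked (ablated) copies of the input in the spirit of (de)randomized smoothing; it is a simplified illustration under stated assumptions, not the paper's own method. The band layout, the `classify` stub, and all parameter names are illustrative.

```python
# A minimal sketch of voting-based certification against adversarial patches,
# simplified from (de)randomized-smoothing-style defenses. The `classify`
# stub and all names/parameters here are illustrative assumptions.
import math
import numpy as np

def classify(image: np.ndarray) -> int:
    """Placeholder for any base classifier; returns a class label."""
    raise NotImplementedError  # plug in a real model here

def certified_predict(image: np.ndarray, band_width: int = 20,
                      patch_width: int = 32, num_classes: int = 10):
    """Classify disjoint column-band ablations of the image and vote.

    A patch of width `patch_width` can only corrupt the votes of the few
    bands it overlaps, so a large enough vote margin certifies the label.
    """
    h, w = image.shape[:2]
    votes = np.zeros(num_classes, dtype=int)
    for start in range(0, w, band_width):
        # Keep one column band, zero out everything else.
        ablated = np.zeros_like(image)
        ablated[:, start:start + band_width] = image[:, start:start + band_width]
        votes[classify(ablated)] += 1
    order = np.argsort(votes)
    top, runner_up = order[-1], order[-2]
    # Worst case: a misaligned patch straddles this many disjoint bands.
    max_affected = math.ceil((patch_width - 1) / band_width) + 1
    # Each corrupted vote can shrink the top-vs-runner-up gap by 2, so the
    # prediction is certified iff the margin exceeds twice that bound.
    certified = votes[top] - votes[runner_up] > 2 * max_affected
    return int(top), bool(certified)
```

In practice, defenses of this family trade off clean accuracy (many ablated views are harder to classify) against the size of the patch they can certify; the survey discusses how analogous masking-and-voting ideas extend beyond classification to detection and segmentation.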
