Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

Physical-world adversarial patch attacks pose a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that apply to a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and effort from both academia and industry to robustify perception models in autonomous vehicles.
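To make the idea of certifiable robustness concrete, the sketch below illustrates one common technique family: voting over pixel ablations, in the spirit of de-randomized smoothing. It is not the specific defenses surveyed in the paper; the `base_classify` function, the column-ablation scheme, and the band/patch widths are hypothetical stand-ins for a real perception model and ablation strategy. The certificate follows from a simple counting argument: a patch of bounded size can only overlap, and therefore corrupt, a bounded number of ablated views, so a large enough voting margin cannot be overturned.

```python
# Illustrative sketch of a voting-based certified defense against adversarial
# patches (de-randomized-smoothing style). Assumptions: images are HxWxC numpy
# arrays and `base_classify` maps an image to an integer class label.
import numpy as np


def column_ablations(image, band_width):
    """Yield copies of `image` in which all pixels outside one vertical band
    of `band_width` columns are zeroed out."""
    h, w, c = image.shape
    for start in range(0, w, band_width):
        ablated = np.zeros_like(image)
        ablated[:, start:start + band_width, :] = image[:, start:start + band_width, :]
        yield ablated


def certified_classify(image, base_classify, num_classes, band_width, patch_width):
    """Majority-vote over column ablations and report whether the prediction
    is certified against any patch at most `patch_width` columns wide."""
    votes = np.zeros(num_classes, dtype=int)
    for ablated in column_ablations(image, band_width):
        votes[base_classify(ablated)] += 1

    top, runner_up = np.argsort(votes)[::-1][:2]

    # A patch spanning `patch_width` columns overlaps at most this many bands
    # (conservative bound), so it can flip at most this many votes.
    max_affected = (patch_width + band_width - 1) // band_width + 1

    # Robust if the adversary cannot close the gap even by moving
    # `max_affected` votes from the top class to the runner-up.
    certified = votes[top] - votes[runner_up] > 2 * max_affected
    return top, certified
```

Detection and segmentation defenses in this space build on analogous ideas (masking or ablating image regions and reasoning about the worst case a bounded patch can induce), but the exact certification logic differs per task.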
