Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

Physical-world adversarial patch attacks pose a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that apply across a variety of perception tasks, including classification, detection, and segmentation. We emphasize unsolved problems in this space to guide future research, and call on both academia and industry to devote attention and effort to robustifying perception models in autonomous vehicles.
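To make the flavor of these defenses concrete, below is a minimal sketch of one core robustness technique in this literature: masking-based certification for image classification, in the spirit of double-masking defenses such as PatchCleanser. This is an illustrative assumption, not code from the paper; all function names, parameters, and the toy model are hypothetical.

```python
"""Sketch of a masking-based certified defense against adversarial
patches, in the spirit of double-masking defenses (e.g., PatchCleanser).
Everything here is illustrative, not the exact algorithm of any paper."""

import itertools
import numpy as np


def _positions(extent, mask_size, stride):
    """Stride positions along one axis, always including the final
    position so the image edge is covered."""
    pos = list(range(0, extent - mask_size + 1, stride))
    if pos[-1] != extent - mask_size:
        pos.append(extent - mask_size)
    return pos


def build_masks(h, w, patch_size, stride):
    """Boolean masks such that every patch_size x patch_size patch
    location is fully covered by at least one mask."""
    mask_size = patch_size + stride - 1  # guarantees full coverage
    masks = []
    for top in _positions(h, mask_size, stride):
        for left in _positions(w, mask_size, stride):
            m = np.zeros((h, w), dtype=bool)
            m[top:top + mask_size, left:left + mask_size] = True
            masks.append(m)
    return masks


def apply_mask(image, mask):
    """Occlude the masked region with a fixed fill value."""
    out = image.copy()
    out[mask] = 0.0
    return out


def certified_predict(model, image, masks):
    """Return (label, certified). If the model's prediction is unanimous
    on all one-mask and two-mask images, then no patch that is fully
    covered by some mask can change the output: the label is certified."""
    one_mask_labels = {model(apply_mask(image, m)) for m in masks}
    if len(one_mask_labels) != 1:
        return None, False  # masked predictions disagree: no certificate
    label = one_mask_labels.pop()
    for m1, m2 in itertools.combinations_with_replacement(masks, 2):
        if model(apply_mask(apply_mask(image, m1), m2)) != label:
            return label, False
    return label, True


if __name__ == "__main__":
    # Toy demo: a hypothetical "model" that classifies by mean brightness.
    toy_model = lambda x: int(x.mean() > 0.4)
    img = np.full((32, 32), 0.9)  # bright image, true label 1
    masks = build_masks(32, 32, patch_size=8, stride=8)
    print(certified_predict(toy_model, img, masks))  # -> (1, True)
```

The key design choice is the coverage property: with mask size patch_size + stride - 1, every possible patch location is fully occluded by at least one mask, which is what turns unanimous masked predictions into a robustness certificate rather than an empirical heuristic.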
