Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

The physical-world adversarial patch attack poses a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that are applicable to a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and efforts from both academia and industry to robustify perception models in autonomous vehicles.
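
One recurring core technique in this line of work is masking-based certification, used for example by PatchCleanser for image classification. The sketch below illustrates the single-round mask-and-agree idea in a hedged, simplified form: the helper names (`generate_masks`, `certify`), the covering parameters, and the toy model are illustrative assumptions, not code from the surveyed defenses.

```python
# A minimal sketch of single-round mask-and-agree certification, assuming a
# mask set that covers every possible patch location. All helper names,
# parameters, and the toy model below are illustrative, not any paper's code.
import numpy as np

def generate_masks(h, w, mask_size, stride):
    """Enumerate top-left corners of square masks on a stride grid,
    always including the bottom/right edges so the image is fully swept."""
    rows = sorted(set(list(range(0, h - mask_size, stride)) + [h - mask_size]))
    cols = sorted(set(list(range(0, w - mask_size, stride)) + [w - mask_size]))
    return [(r, c) for r in rows for c in cols]

def certify(model, image, mask_positions, mask_size):
    """If every one-masked copy of the image receives the same label, then no
    adversarial patch confined to a single mask region could have flipped the
    prediction: some mask removes the patch entirely, and even that masked
    (patch-free) prediction agrees with all the others."""
    labels = set()
    for top, left in mask_positions:
        masked = image.copy()
        masked[top:top + mask_size, left:left + mask_size] = 0.0  # occlude region
        labels.add(model(masked))
    if len(labels) == 1:
        return labels.pop(), True   # unanimous agreement -> certified on this input
    return model(image), False      # disagreement -> prediction stands, no certificate

if __name__ == "__main__":
    # Toy stand-in classifier: thresholds mean pixel intensity.
    toy_model = lambda img: int(img.mean() > 0.5)
    img = np.random.rand(32, 32).astype(np.float32)
    positions = generate_masks(32, 32, mask_size=12, stride=8)
    label, certified = certify(toy_model, img, positions, mask_size=12)
    print(f"label={label}, certified={certified}")
```

For a square patch of side p, choosing mask_size >= p + stride - 1 guarantees that at least one mask fully covers any patch placement. Deployed defenses such as PatchCleanser add a second masking round to certify many more inputs, and analogous masking and voting ideas have been extended to detection and segmentation, the other perception tasks covered by this survey.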
