Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

The physical-world adversarial patch attack poses a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that apply across a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and effort from both academia and industry to robustify perception models in autonomous vehicles.
