Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

Physical-world adversarial patch attacks pose a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that apply across a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and effort from both academia and industry to robustify perception models in autonomous vehicles.
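
To make the core idea concrete, below is a minimal Python sketch of the masking-based certification strategy that underlies several certifiably robust patch defenses (e.g., PatchCleanser). It is an illustration, not any specific defense's implementation: the `model` callable, `mask_size`, and `stride` are assumed names, and this simplified single-masking variant omits the second masking round that defenses such as PatchCleanser use to obtain their formal guarantee.

```python
import numpy as np

def mask_positions(img_size: int, mask_size: int, stride: int) -> list[int]:
    """Top-left mask coordinates on a stride grid, always including the far
    edge, so that any square patch of size <= mask_size - stride + 1 is
    fully occluded by at least one mask."""
    last = img_size - mask_size
    return sorted(set(range(0, last, stride)) | {last})

def generate_masks(img_size: int, mask_size: int, stride: int) -> list[np.ndarray]:
    """Boolean masks over an img_size x img_size image: True keeps a pixel,
    False zeroes it out."""
    masks = []
    for y in mask_positions(img_size, mask_size, stride):
        for x in mask_positions(img_size, mask_size, stride):
            m = np.ones((img_size, img_size), dtype=bool)
            m[y:y + mask_size, x:x + mask_size] = False
            masks.append(m)
    return masks

def certified_predict(model, image: np.ndarray, masks: list[np.ndarray]):
    """Simplified certification check: `model` maps an HxWxC array to a
    class label. If every one-masked prediction agrees, the mask covering
    the patch has removed its influence, so (in this simplified analysis)
    the agreed label is robust. Returns (label, certified)."""
    preds = {model(image * m[..., None]) for m in masks}
    if len(preds) == 1:
        return preds.pop(), True
    return None, False  # disagreement: prediction cannot be certified
```

The mask size and stride expose the central trade-off in this family of defenses: larger masks certify robustness against larger patches but occlude more of the image, which degrades clean accuracy.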
