Chong Xiang (Princeton University), Chawin Sitawarin (University of California, Berkeley), Tong Wu (Princeton University), Prateek Mittal (Princeton University)

ETAS Best Short Paper Award Runner-Up!

The physical-world adversarial patch attack poses a security threat to AI perception models in autonomous vehicles. To mitigate this threat, researchers have designed defenses with certifiable robustness. In this paper, we survey existing certifiably robust defenses and highlight core robustness techniques that are applicable to a variety of perception tasks, including classification, detection, and segmentation. We emphasize the unsolved problems in this space to guide future research, and call for attention and efforts from both academia and industry to robustify perception models in autonomous vehicles.
