Raymond Muller (Purdue University), Yanmao Man (University of Arizona), Z. Berkay Celik (Purdue University), Ming Li (University of Arizona) and Ryan Gerdes (Virginia Tech)

With the emergence of vision-based autonomous driving (AD) systems, it becomes increasingly important to have datasets for evaluating their correct operation and identifying potential security flaws. However, when collecting large amounts of data, either human experts must manually label potentially hundreds of thousands of image frames, or systems use machine learning algorithms to label the data in the hope that the resulting accuracy is good enough for the application. This becomes especially problematic when tracking context information, such as the location and velocity of surrounding objects, which is useful for evaluating the correctness and improving the stability and robustness of AD systems.
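To make the notion of context information concrete, the minimal sketch below (not taken from the paper) illustrates how the velocity of a surrounding object could be estimated from its labeled positions in consecutive annotated frames. The LabeledDetection structure, its fields, and the ego-relative coordinate convention are assumptions made purely for illustration.

```python
from dataclasses import dataclass


@dataclass
class LabeledDetection:
    """One labeled observation of a surrounding object (hypothetical schema)."""
    frame_id: int     # index of the image frame
    timestamp: float  # capture time in seconds
    x: float          # object center, meters (assumed ego-relative)
    y: float


def estimate_velocity(prev: LabeledDetection, curr: LabeledDetection) -> tuple[float, float]:
    """Finite-difference velocity estimate (m/s) between two labeled frames."""
    dt = curr.timestamp - prev.timestamp
    if dt <= 0:
        raise ValueError("timestamps must be strictly increasing")
    return (curr.x - prev.x) / dt, (curr.y - prev.y) / dt


# Example: an object labeled 0.1 s apart that moved 1.5 m forward -> ~15 m/s.
v = estimate_velocity(
    LabeledDetection(frame_id=0, timestamp=0.0, x=10.0, y=2.0),
    LabeledDetection(frame_id=1, timestamp=0.1, x=11.5, y=2.0),
)
print(v)  # roughly (15.0, 0.0)
```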
