Ahmed Abdo, Sakib Md Bin Malek, Xuanpeng Zhao, Nael Abu-Ghazaleh (University of California, Riverside)

ZOOX AutoDriving Security Award Winner ($1,000 cash prize)!

Autonomous systems are vulnerable to physical attacks that manipulate their sensors through spoofing, adversarial inputs, or interference. If the sensor values are incorrect, an autonomous system can be made to malfunction or even steered to perform an adversary-chosen action, making these attacks a critical threat to the success of such systems. To counter them, a number of prior defenses have been proposed that compare the collected sensor values against those predicted by a physics-based model of the vehicle dynamics; these solutions are limited by the accuracy of the prediction, which can leave room for an attacker to operate without being detected. We propose AVMON, which contributes a new detector that substantially improves detection accuracy using three ideas: (1) training and specializing the configuration of an estimation filter to the vehicle and environment dynamics; (2) efficiently correcting errors due to non-linearities, and capturing some effects outside the physics model, using a residual machine-learning estimator; and (3) a change-detection algorithm that tracks sensor behavior to enable more accurate filtering of transients. Together, these ideas enable efficient and highly accurate estimation of the physical state of the vehicle, which substantially shrinks the attacker's opportunity to manipulate the sensor data without detection. We show that AVMON can detect a wide range of attacks, with overhead low enough for real-time implementations. We demonstrate AVMON for both ground vehicles (using an RC car testbed) and aerial drones (using a hardware-in-the-loop simulator), as well as in simulations.
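To make the three ideas concrete, below is a minimal Python sketch of a detector in this spirit: a physics-based prediction of the next sensor reading, a learned residual correction for effects the physics model misses, and a one-sided CUSUM statistic for change detection. Everything in the sketch is an illustrative assumption rather than the paper's implementation: the class name AVMonStyleDetector, the toy longitudinal model, the linear least-squares residual fit (standing in for a machine-learning estimator), the CUSUM variant, and all threshold values are hypothetical.

```python
import numpy as np

class AVMonStyleDetector:
    """Sketch of a sensor-spoofing detector: physics prediction, a learned
    residual correction, and one-sided CUSUM change detection. Illustrative
    only; not the filter configuration or models used in the AVMON paper."""

    def __init__(self, alarm_threshold=0.5, slack=0.02):
        self.alarm_threshold = alarm_threshold  # CUSUM alarm level (m/s)
        self.slack = slack      # per-step decay that absorbs noise/transients
        self.g = 0.0            # running CUSUM statistic
        self.w = None           # residual-model weights (linear, for brevity)

    def physics_predict(self, speed, accel_cmd, dt):
        # Toy longitudinal model v' = v + a*dt; a real system would use an
        # estimation filter tuned to the vehicle and environment dynamics.
        return speed + accel_cmd * dt

    def fit_residual(self, speeds, accels, next_speeds, dt):
        # Learn the systematic error of the physics model (drag, actuator
        # lag, ...) from nominal driving logs. A linear least-squares fit
        # stands in here for a machine-learning residual estimator.
        resid = next_speeds - self.physics_predict(speeds, accels, dt)
        X = np.column_stack([speeds, accels, np.ones_like(speeds)])
        self.w, *_ = np.linalg.lstsq(X, resid, rcond=None)

    def predict(self, speed, accel_cmd, dt):
        p = self.physics_predict(speed, accel_cmd, dt)
        if self.w is not None:
            p += np.array([speed, accel_cmd, 1.0]) @ self.w
        return p

    def step(self, speed, accel_cmd, measured_next, dt):
        # One-sided CUSUM on the absolute prediction error: brief transients
        # decay via the slack term; sustained spoofing accumulates to alarm.
        err = abs(measured_next - self.predict(speed, accel_cmd, dt))
        self.g = max(0.0, self.g + err - self.slack)
        return self.g > self.alarm_threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dt, drag = 0.1, 0.05
    det = AVMonStyleDetector()

    # Train the residual corrector on nominal data whose true dynamics
    # include a drag term that the toy physics model omits.
    v, vs, accs, nxt = 0.0, [], [], []
    for _ in range(500):
        a = rng.uniform(-1.0, 1.0)
        v_next = v + (a - drag * v) * dt + rng.normal(0.0, 0.005)
        vs.append(v); accs.append(a); nxt.append(v_next)
        v = v_next
    det.fit_residual(np.array(vs), np.array(accs), np.array(nxt), dt)

    # Replay: clean readings stay quiet; a constant +0.3 m/s spoofing
    # offset injected at step 100 triggers the alarm within a few steps.
    v = 0.0
    for t in range(200):
        a = rng.uniform(-1.0, 1.0)
        v_next = v + (a - drag * v) * dt + rng.normal(0.0, 0.005)
        measured = v_next + (0.3 if t >= 100 else 0.0)
        if det.step(v, a, measured, dt):
            print(f"spoofing alarm at step {t}")
            break
        v = v_next
```

The slack term illustrates the transient-filtering point in idea (3): short prediction errors decay without raising an alarm, while a sustained spoofing offset accumulates in the CUSUM statistic and crosses the threshold within a few steps.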
