Paolo Cerracchio, Stefano Longari, Michele Carminati, Stefano Zanero (Politecnico di Milano)

The evolution of vehicles has led to the integration of numerous devices that communicate via the controller area network (CAN) protocol. This protocol lacks security measures, leaving interconnected critical components vulnerable. The expansion of local and remote connectivity has increased the attack surface, heightening the risk of unauthorized intrusions. Since recent studies have shown that external attacks constitute a real-world threat to vehicle availability, driving data confidentiality, and passenger safety, researchers and car manufacturers have focused on implementing effective defenses. Intrusion detection systems (IDSs), frequently employing machine learning (ML) models, are a prominent solution. However, IDSs are not foolproof, and attackers with knowledge of these systems can orchestrate adversarial attacks to evade detection. In this paper, we evaluate the effectiveness of popular adversarial techniques in the automotive domain to ascertain the resilience, characteristics, and vulnerabilities of several ML-based IDSs. We propose three gradient-based evasion algorithms and evaluate them against six detection systems. We find that the algorithms’ performance heavily depends on the complexity of the target model and on the quality of the intended attack. Finally, we study the transferability of the evasion attacks between different detection systems and across different time instants of the communication.
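
To make the evasion setting concrete, the sketch below illustrates a single gradient-based evasion step in the spirit of FGSM against a binary ML-based IDS. It is an illustrative assumption only: the model architecture, feature window, class labels, and step size are hypothetical placeholders, and it does not reproduce the three algorithms proposed in the paper.

```python
# Illustrative, hedged sketch: one FGSM-style gradient step that nudges a
# malicious CAN feature window toward the IDS's "benign" decision.
# The IDS model and all names below are hypothetical placeholders.
import torch
import torch.nn as nn

class TinyIDS(nn.Module):
    """Stand-in binary classifier over a normalized window of CAN features."""
    def __init__(self, n_features: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits: [benign, attack]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def fgsm_evasion(model: nn.Module, x: torch.Tensor, epsilon: float = 0.05) -> torch.Tensor:
    """Perturb a malicious sample x so the IDS scores it closer to 'benign'.

    Takes one signed-gradient step that decreases the loss toward the benign
    label, i.e. an evasion attack rather than generic misclassification.
    """
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    benign_label = torch.zeros(x.shape[0], dtype=torch.long)  # class 0 = benign
    loss = nn.functional.cross_entropy(model(x_adv), benign_label)
    loss.backward()
    # Step against the gradient of the benign-class loss, clamped to the valid feature range.
    x_adv = (x_adv - epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    ids = TinyIDS()
    malicious_window = torch.rand(1, 32)  # hypothetical normalized CAN feature window
    evasive_window = fgsm_evasion(ids, malicious_window)
    print("max perturbation:", (evasive_window - malicious_window).abs().max().item())
```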
