Simon Shigol (Ben Gurion University of the Negev), Roy Peled (Ben Gurion University of the Negev), Avishag Shapira (Ben Gurion University of the Negev), Yuval Elovici (Ben Gurion University of the Negev), Asaf Shabtai (Ben Gurion University of the Negev)

Machine learning (ML) is increasingly embedded in satellite systems, supporting both operational tasks and payload services. While ML provides greater efficiency and autonomy, it also exposes satellite systems to a new class of vulnerabilities known as adversarial ML (AML). Although AML threats have been studied extensively in other domains, their impact on satellite systems, which operate with limited power and computing resources and under latency-critical conditions, remains unexplored. This paper presents a structured risk assessment of AML threats to satellite ML applications. We review common types of cyber threats and AML techniques, providing clear definitions of AML categories and their relevance to satellite ML applications. We then map these threats to satellite operations and payloads, constructing a domain-specific framework that categorizes how adversarial attacks manifest under space conditions. Leveraging this framework, we apply a risk assessment methodology to evaluate the feasibility of attacks and their potential impact on missions. Our findings show that tasks such as anti-jamming control and telemetry-based fault detection are especially vulnerable, with integrity-focused attacks posing the most significant risk to the evaluated applications. In contrast, privacy-focused threats such as membership inference pose less risk in practice. We also suggest mitigation strategies tailored to space, including adversarial training, resilient data pipelines, and runtime monitoring. The results of our risk assessment highlight the need for further research aimed at strengthening ML security in aerospace environments and provide a foundation for the deployment of trustworthy ML in space missions.
