Songze Li (Southeast University), Jiameng Cheng (Southeast University), Yiming Li (Nanyang Technological University), Xiaojun Jia (Nanyang Technological University), Dacheng Tao (Nanyang Technological University)

Multimodal large language models (MLLMs), which combine text with other modalities such as images, have demonstrated powerful capabilities and are increasingly adopted in real-world commercial systems. However, their growing accessibility also raises concerns about misuse, such as generating harmful content. To mitigate these risks, alignment techniques are commonly applied to align model behavior with human values. Despite these efforts, recent studies have shown that jailbreak attacks can circumvent alignment and elicit unsafe outputs. Most existing jailbreak methods, however, are tailored for open-source models and exhibit limited effectiveness against commercial MLLM-integrated systems, which often employ additional filters. These filters can detect and block malicious input and output content, significantly reducing the jailbreak threat.

In this paper, we reveal that the success of these safety filters rests on a critical assumption: that malicious content must be explicitly visible in either the input or the output. This assumption, while often valid for traditional LLM-integrated systems, breaks down in MLLM-integrated systems, where attackers can leverage multiple modalities to conceal adversarial intent, creating a false sense of security in existing MLLM-integrated systems. To challenge this assumption, we propose Odysseus, a novel jailbreak paradigm that introduces dual steganography to covertly embed both malicious queries and responses into benign-looking images. Our method proceeds through four stages: (1) malicious query encoding, (2) steganography embedding, (3) model interaction, and (4) response extraction. We first encode the adversary-specified malicious prompt into binary matrices and embed them into an image using a steganography model; the modified image is then fed into the victim MLLM-integrated system. We further encourage the victim system to implant the generated illegitimate content into a carrier image (again via steganography), from which the attacker decodes the hidden response locally. Extensive experiments on benchmark datasets demonstrate that Odysseus successfully jailbreaks several pioneering and realistic MLLM-integrated systems, including GPT-4o, Gemini-2.0-pro, Gemini-2.0-flash, and Grok-3, achieving up to a 99% attack success rate. These results expose a fundamental blind spot in existing defenses and call for rethinking cross-modal security in MLLM-integrated systems.
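The encode-embed-extract loop described above can be illustrated with a minimal sketch. The paper uses a learned steganography model; here that component is replaced by simple least-significant-bit (LSB) embedding into a flat pixel array, and all function names (`text_to_bits`, `embed`, `extract`) are hypothetical helpers, not the authors' implementation.

```python
# Hypothetical sketch of the hide-query / recover-response loop.
# A real attack would use a trained steganography model on an RGB image;
# this demo embeds message bits into the LSBs of a grayscale pixel list.

def text_to_bits(text: str) -> list[int]:
    """Encode a UTF-8 string as a flat list of bits (MSB first)."""
    data = text.encode("utf-8")
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def bits_to_text(bits: list[int]) -> str:
    """Inverse of text_to_bits: pack bits back into bytes, then decode."""
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return out.decode("utf-8", errors="ignore")

def embed(pixels: list[int], message: str) -> list[int]:
    """Hide a message in the carrier: 32-bit length header + payload bits."""
    bits = text_to_bits(message)
    header = [(len(bits) >> i) & 1 for i in range(31, -1, -1)]
    payload = header + bits
    if len(payload) > len(pixels):
        raise ValueError("carrier too small for message")
    stego = list(pixels)
    for i, bit in enumerate(payload):
        stego[i] = (stego[i] & ~1) | bit  # overwrite the LSB only
    return stego

def extract(pixels: list[int]) -> str:
    """Recover the hidden message: read the length header, then the bits."""
    n = 0
    for p in pixels[:32]:
        n = (n << 1) | (p & 1)
    bits = [p & 1 for p in pixels[32:32 + n]]
    return bits_to_text(bits)

carrier = [128] * 4096          # stand-in for a grayscale carrier image
stego = embed(carrier, "hidden query")
assert extract(stego) == "hidden query"
```

In the attack, the attacker runs `embed` locally on the query before upload; on the response side, the roles reverse: the victim system is prompted to perform the embedding, and the attacker runs `extract` offline, so the harmful text never appears in plaintext where a filter could see it.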
