Efficient Indirect LLM Jailbreak via Multimodal-LLM Jailbreak

This paper focuses on jailbreak attacks against large language models (LLMs), which elicit objectionable content in response to harmful user queries. Unlike previous LLM-jailbreak methods that target the LLM directly, our approach begins by constructing a multimodal large language model (MLLM) built upon the target LLM. We then perform an efficient MLLM jailbreak to obtain a jailbreaking embedding, and finally convert that embedding into a textual jailbreaking suffix used to attack the target LLM. Compared to direct LLM-jailbreak methods, our indirect approach is more efficient, since MLLMs are more vulnerable to jailbreak than pure LLMs. Additionally, to improve the attack success rate, we propose an image-text semantic matching scheme to identify a suitable initial input. Extensive experiments demonstrate that our approach surpasses current state-of-the-art jailbreak methods in both efficiency and effectiveness. Moreover, it exhibits superior cross-class generalization.
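To make the embedding-to-suffix step concrete, below is a minimal sketch of how a continuous jailbreaking embedding could be projected back to discrete tokens via nearest-neighbor lookup in the LLM's input-embedding table. This is an illustration under assumed shapes and names, not the paper's released code: embedding_to_suffix, the toy embedding table, and all dimensions are hypothetical stand-ins.

import torch
import torch.nn.functional as F

def embedding_to_suffix(jailbreak_emb: torch.Tensor,
                        token_embeddings: torch.Tensor) -> list[int]:
    """Map each continuous embedding vector to its nearest token id
    (by cosine similarity) in the LLM's input-embedding table."""
    emb = F.normalize(jailbreak_emb, dim=-1)        # (suffix_len, dim)
    table = F.normalize(token_embeddings, dim=-1)   # (vocab, dim)
    sims = emb @ table.T                            # (suffix_len, vocab)
    return sims.argmax(dim=-1).tolist()             # one token id per position

# Toy stand-ins: a 16-token suffix over a 32k vocabulary, dim 4096.
vocab, dim, suffix_len = 32000, 4096, 16
table = torch.randn(vocab, dim)        # stands in for model.embed_tokens.weight
jb_emb = torch.randn(suffix_len, dim)  # stands in for the optimized MLLM embedding
suffix_ids = embedding_to_suffix(jb_emb, table)
# The resulting ids would be detokenized into a textual suffix and
# appended to the harmful query when attacking the target LLM.

In the actual method, jb_emb would come from optimizing the MLLM's visual (or soft-prompt) input to elicit the harmful response; the projection above is what turns that continuous artifact into a transferable textual suffix.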
@article{niu2025_2405.20015,
  title   = {Efficient Indirect LLM Jailbreak via Multimodal-LLM Jailbreak},
  author  = {Zhenxing Niu and Yuyao Sun and Haoxuan Ji and Zheng Lin and Haichang Gao and Xinbo Gao and Gang Hua and Rong Jin},
  journal = {arXiv preprint arXiv:2405.20015},
  year    = {2025}
}