
Towards Robust Multimodal Large Language Models Against Jailbreak Attacks

Abstract

While multimodal large language models (MLLMs) have achieved remarkable success in recent years, their susceptibility to jailbreak attacks has come to light. In such attacks, adversaries exploit carefully crafted prompts to coerce models into generating harmful or undesirable content. Existing defense mechanisms often rely on external inference steps or safety alignment training, both of which are ineffective or impractical against sophisticated adversarial perturbations in white-box scenarios. To address these challenges and bolster MLLM robustness, we introduce SafeMLLM, which adopts an adversarial training framework that alternates between an attack step for generating adversarial noise and a model updating step. At the attack step, SafeMLLM generates adversarial perturbations through a newly proposed contrastive embedding attack (CoE-Attack), which optimizes token embeddings under a contrastive objective. SafeMLLM then updates model parameters to neutralize the perturbation effects while preserving model utility on benign inputs. We evaluate SafeMLLM across six MLLMs and six jailbreak methods spanning multiple modalities. Experimental results show that SafeMLLM effectively defends against diverse attacks while maintaining robust performance and utility.
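The following is a minimal sketch of the alternating attack/defense loop described in the abstract, assuming a PyTorch autoregressive model whose forward pass accepts continuous token embeddings. All names here (ToyLM, continuation_nll, attack_step, defense_step, num_adv, utility_weight) are illustrative assumptions, not the authors' implementation, and the contrastive loss is only one plausible instantiation of the CoE-Attack objective.

```python
# Sketch of SafeMLLM-style alternating adversarial training (assumed form).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyLM(nn.Module):
    """Tiny stand-in for an MLLM: embeds tokens and predicts next-token logits."""

    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.backbone = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward_from_embeds(self, embeds):
        hidden, _ = self.backbone(embeds)
        return self.head(hidden)


def continuation_nll(model, prefix_emb, target_ids):
    """Teacher-forced NLL of `target_ids` given a prefix of embeddings."""
    tgt_emb = model.embed(target_ids)
    inputs = torch.cat([prefix_emb, tgt_emb[:, :-1]], dim=1)
    logits = model.forward_from_embeds(inputs)
    preds = logits[:, prefix_emb.size(1) - 1:]  # positions that predict the target tokens
    return F.cross_entropy(preds.reshape(-1, preds.size(-1)), target_ids.reshape(-1))


def attack_step(model, prompt_ids, harmful_ids, refusal_ids,
                num_adv=8, steps=30, lr=0.05):
    """Optimize adversarial token embeddings under a contrastive objective:
    pull the model toward the harmful continuation, push it away from the refusal."""
    prompt_emb = model.embed(prompt_ids).detach()
    adv = torch.randn(prompt_ids.size(0), num_adv, prompt_emb.size(-1), requires_grad=True)
    opt = torch.optim.Adam([adv], lr=lr)
    for _ in range(steps):
        prefix = torch.cat([prompt_emb, adv], dim=1)
        loss = continuation_nll(model, prefix, harmful_ids) \
             - continuation_nll(model, prefix, refusal_ids)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adv.detach()


def defense_step(model, optimizer, prompt_ids, adv, refusal_ids,
                 benign_ids, benign_targets, utility_weight=1.0):
    """Update model parameters to refuse under the perturbation while
    keeping benign input/output behaviour (utility) intact."""
    prefix = torch.cat([model.embed(prompt_ids), adv], dim=1)
    loss_robust = continuation_nll(model, prefix, refusal_ids)
    loss_utility = continuation_nll(model, model.embed(benign_ids), benign_targets)
    loss = loss_robust + utility_weight * loss_utility
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyLM()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Toy token ids standing in for a harmful prompt, a harmful target response,
    # a safe refusal, and a benign prompt/response pair.
    prompt = torch.randint(0, 100, (2, 6))
    harmful = torch.randint(0, 100, (2, 4))
    refusal = torch.randint(0, 100, (2, 4))
    benign_x = torch.randint(0, 100, (2, 6))
    benign_y = torch.randint(0, 100, (2, 4))
    for it in range(5):
        adv = attack_step(model, prompt, harmful, refusal)
        model.zero_grad()  # drop gradients accumulated on model weights during the attack
        loss = defense_step(model, optimizer, prompt, adv, refusal, benign_x, benign_y)
        print(f"iteration {it}: defense loss {loss:.3f}")
```

In an actual MLLM the perturbation would presumably also touch the visual-token embeddings, and the two steps would alternate over batches of harmful and benign prompts; this toy loop only illustrates the alternating structure and the two competing loss terms.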

@article{yin2025_2502.00653,
  title={Towards Robust Multimodal Large Language Models Against Jailbreak Attacks},
  author={Ziyi Yin and Yuanpu Cao and Han Liu and Ting Wang and Jinghui Chen and Fenglong Ma},
  journal={arXiv preprint arXiv:2502.00653},
  year={2025}
}