JALMBench: Benchmarking Jailbreak Vulnerabilities in Audio Language Models
Audio Language Models (ALMs) have made significant progress recently. These models integrate the audio modality directly, rather than converting speech to text and feeding the text to Large Language Models (LLMs). While jailbreak attacks on LLMs have been extensively studied, the security of ALMs with audio inputs remains largely unexplored. There is currently no adversarial audio dataset or unified framework designed specifically to evaluate and compare attacks across ALMs. In this paper, we present JALMBench, the \textit{first} comprehensive benchmark for assessing the safety of ALMs against jailbreak attacks. JALMBench includes a dataset of 2,200 text samples and 51,381 audio samples spanning over 268 hours. It supports 12 mainstream ALMs, 4 text-transferred and 4 audio-originated attack methods, and 5 defense methods. Using JALMBench, we provide an in-depth analysis of attack efficiency, topic sensitivity, voice diversity, and attack representations. We also explore mitigation strategies for the attacks at both the prompt level and the response level.
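To make the evaluation setup concrete, the sketch below shows how a jailbreak benchmark of this kind can iterate over (ALM, attack) pairs and report an attack success rate. All names (load_alm, apply_attack, is_harmful) are hypothetical placeholders for illustration only, not the actual JALMBench API.

```python
# Hypothetical sketch of a jailbreak-evaluation loop over ALMs and attack
# methods. load_alm, apply_attack, and is_harmful are illustrative stubs,
# not the real JALMBench interface.
from itertools import product

ALMS = ["alm_a", "alm_b"]                            # placeholder model identifiers
ATTACKS = ["text_transferred", "audio_originated"]   # placeholder attack families

def load_alm(name):
    """Return a callable mapping an audio prompt to a text response."""
    raise NotImplementedError  # stand-in for a real model loader

def apply_attack(attack, audio_prompt):
    """Return an adversarial audio sample derived from the benign prompt."""
    raise NotImplementedError

def is_harmful(response):
    """Judge whether the model's response complies with the harmful request."""
    raise NotImplementedError

def attack_success_rate(alm_name, attack, audio_prompts):
    """Fraction of adversarial prompts that elicit a harmful response."""
    model = load_alm(alm_name)
    hits = sum(is_harmful(model(apply_attack(attack, p))) for p in audio_prompts)
    return hits / len(audio_prompts)

# Example driver: evaluate every (ALM, attack) pair on a prompt set.
# for alm_name, attack in product(ALMS, ATTACKS):
#     print(alm_name, attack, attack_success_rate(alm_name, attack, prompts))
```

This structure mirrors the benchmark's description (multiple ALMs, multiple attack families, a harmfulness judgment per response); the specific loaders, attack transforms, and judging procedure would come from the released framework itself.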
@article{peng2025_2505.17568,
  title={JALMBench: Benchmarking Jailbreak Vulnerabilities in Audio Language Models},
  author={Zifan Peng and Yule Liu and Zhen Sun and Mingchen Li and Zeren Luo and Jingyi Zheng and Wenhan Dong and Xinlei He and Xuechao Wang and Yingjie Xue and Shengmin Xu and Xinyi Huang},
  journal={arXiv preprint arXiv:2505.17568},
  year={2025}
}