Evaluating Robustness of Large Audio Language Models to Audio Injection: An Empirical Study

Abstract

Large Audio-Language Models (LALMs) are increasingly deployed in real-world applications, yet their robustness against malicious audio injection attacks remains underexplored. This study systematically evaluates five leading LALMs across four attack scenarios: Audio Interference Attack, Instruction Following Attack, Context Injection Attack, and Judgment Hijacking Attack. Using metrics such as Defense Success Rate, Context Robustness Score, and Judgment Robustness Index, we quantitatively assess each model's vulnerabilities and resilience. Experimental results reveal significant performance disparities among models; no single model consistently outperforms the others across all attack types. The position of malicious content critically influences attack effectiveness, with injections at the beginning of an audio sequence being especially potent. A negative correlation between instruction-following capability and robustness suggests that models adhering strictly to instructions may be more susceptible, in contrast to the greater resistance shown by safety-aligned models. Additionally, system prompts show mixed effectiveness, indicating the need for tailored defense strategies. This work introduces a benchmark framework and highlights the importance of integrating robustness into training pipelines. Our findings emphasize the need for multi-modal defenses and architectural designs that decouple capability from susceptibility to enable secure LALM deployment.

@article{hou2025_2505.19598,
  title={Evaluating Robustness of Large Audio Language Models to Audio Injection: An Empirical Study},
  author={Guanyu Hou and Jiaming He and Yinhang Zhou and Ji Guo and Yitong Qiao and Rui Zhang and Wenbo Jiang},
  journal={arXiv preprint arXiv:2505.19598},
  year={2025}
}