
IFEval-Audio: Benchmarking Instruction-Following Capability in Audio-based Large Language Models

Main: 3 pages · Appendix: 2 pages · Bibliography: 2 pages · 2 figures · 1 table
Abstract

Large language models (LLMs) have demonstrated strong instruction-following capabilities in text-based tasks. However, this ability often deteriorates in multimodal models after alignment with non-text modalities such as images or audio. While several recent efforts have investigated instruction-following performance in text and vision-language models, instruction-following in audio-based LLMs remains largely unexplored. To bridge this gap, we introduce IFEval-Audio, a novel evaluation dataset designed to assess instruction-following ability in audio LLMs. IFEval-Audio contains 280 audio-instruction-answer triples spanning six dimensions: Content, Capitalization, Symbol, List Structure, Length, and Format. Each example pairs an audio input with a text instruction that requires the model to generate an output following a specified structure. We benchmark state-of-the-art audio LLMs on their ability to follow audio-involved instructions, and we release the dataset publicly to support future research in this emerging area.
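Benchmarks in the IFEval style typically verify instruction-following with deterministic, rule-based checks rather than a judge model. The sketch below illustrates what such checks might look like for three of the paper's six dimensions (Capitalization, Length, List Structure); the function names and exact constraint logic are assumptions for illustration, not the paper's actual evaluation code.

```python
# Hypothetical rule-based checks of the kind IFEval-style benchmarks use
# to verify instruction-following. Dimension names mirror the paper;
# the specific constraint logic here is illustrative only.

def check_capitalization(response: str) -> bool:
    """Pass if the response is entirely uppercase (example constraint)."""
    return response == response.upper() and any(c.isalpha() for c in response)

def check_length(response: str, max_words: int) -> bool:
    """Pass if the response stays within a word budget."""
    return len(response.split()) <= max_words

def check_list_structure(response: str, n_items: int) -> bool:
    """Pass if the response is a numbered list with exactly n_items entries."""
    lines = [ln for ln in response.splitlines() if ln.strip()]
    return len(lines) == n_items and all(
        ln.strip().startswith(f"{i}.") for i, ln in enumerate(lines, 1)
    )
```

Because each check is a pure string predicate, a model's score on a dimension can be computed as the fraction of its responses that pass the corresponding check.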

@article{gao2025_2505.16774,
  title={IFEval-Audio: Benchmarking Instruction-Following Capability in Audio-based Large Language Models},
  author={Yiming Gao and Bin Wang and Chengwei Wei and Shuo Sun and AiTi Aw},
  journal={arXiv preprint arXiv:2505.16774},
  year={2025}
}