AC/DC: LLM-based Audio Comprehension via Dialogue Continuation

Main: 4 pages · 1 figure · 3 tables · Bibliography: 1 page
Abstract

We propose an instruction-following audio comprehension model that leverages the dialogue continuation ability of large language models (LLMs). Instead of directly generating the target caption for each training example, the proposed method trains the model to produce a response as if the input caption had triggered a dialogue. This dialogue continuation training mitigates the caption variation problem: learning to continue a dialogue captures the caption's meaning beyond its surface-level words. As a result, our model achieves zero-shot instruction-following capability without multitask instruction tuning, even when trained solely on audio captioning datasets. Experiments on the AudioCaps, WavCaps, and Clotho datasets, together with the AudioBench audio-scene question-answering tests, demonstrate our model's ability to follow various unseen instructions.

@article{fujita2025_2506.10312,
  title={AC/DC: LLM-based Audio Comprehension via Dialogue Continuation},
  author={Yusuke Fujita and Tomoya Mizumoto and Atsushi Kojima and Lianbo Liu and Yui Sudo},
  journal={arXiv preprint arXiv:2506.10312},
  year={2025}
}