Beyond Classification: Towards Speech Emotion Reasoning with Multitask AudioLLMs

Audio Large Language Models (AudioLLMs) have achieved strong results on semantic tasks such as speech recognition and translation, but remain limited in modeling paralinguistic cues such as emotion. Existing approaches often treat emotion understanding as a classification problem, offering little insight into the rationale behind predictions. In this work, we explore emotion reasoning, a strategy that leverages the generative capabilities of AudioLLMs to enhance emotion recognition by producing semantically aligned, evidence-grounded explanations. To support this in multitask AudioLLMs, we introduce a unified framework combining reasoning-augmented data supervision, a dual-encoder architecture, and task-alternating training. This approach enables AudioLLMs to learn the different tasks effectively while incorporating emotion reasoning. Experiments on IEMOCAP and MELD show that our approach not only improves emotion prediction accuracy but also enhances the coherence and evidential grounding of the generated responses.
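For illustration only, the sketch below shows one plausible way a dual-encoder audio front-end and a task-alternating training loop could be wired up in PyTorch. The encoder choices, dimensions, and the `llm_step` callable are assumptions for the sketch, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's released code): a dual-encoder AudioLLM
# front-end plus a task-alternating training loop. Module names, dimensions,
# and the llm_step callable are illustrative assumptions.
from itertools import cycle
import torch
import torch.nn as nn


class DualEncoderFrontEnd(nn.Module):
    """Fuses a semantic speech encoder with a paralinguistic (emotion-oriented)
    encoder into a single sequence of LLM prefix embeddings."""

    def __init__(self, sem_dim=1024, para_dim=768, llm_dim=2048):
        super().__init__()
        # Stand-ins for pretrained audio encoders; a real system would load,
        # e.g., an ASR-style encoder and an emotion-tuned SSL encoder here.
        self.semantic_encoder = nn.GRU(80, sem_dim, batch_first=True)
        self.paralinguistic_encoder = nn.GRU(80, para_dim, batch_first=True)
        self.sem_proj = nn.Linear(sem_dim, llm_dim)
        self.para_proj = nn.Linear(para_dim, llm_dim)

    def forward(self, mel):  # mel: (B, T, 80) log-mel frames
        sem, _ = self.semantic_encoder(mel)        # (B, T, sem_dim)
        para, _ = self.paralinguistic_encoder(mel) # (B, T, para_dim)
        # Concatenate along the sequence axis so the LLM attends to both views.
        return torch.cat([self.sem_proj(sem), self.para_proj(para)], dim=1)


def task_alternating_training(frontend, llm_step, semantic_batches,
                              emotion_batches, optimizer, steps=1000):
    """Alternate optimisation steps between semantic batches (e.g. ASR,
    translation) and emotion-reasoning batches (label plus free-text
    rationale targets) so neither objective dominates training."""
    sem_iter, emo_iter = cycle(semantic_batches), cycle(emotion_batches)
    for step in range(steps):
        task, batch = (("semantic", next(sem_iter)) if step % 2 == 0
                       else ("emotion_reasoning", next(emo_iter)))
        prefix = frontend(batch["mel"])  # audio prefix embeddings for the LLM
        # llm_step is assumed to prepend the prefix to the text prompt and
        # return a next-token cross-entropy loss on the target response.
        loss = llm_step(prefix, batch["target_text"], task)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In this kind of setup, the emotion-reasoning batches would pair each utterance with a target response that states the predicted emotion and a short, evidence-grounded explanation, while the semantic batches keep the model's recognition and translation abilities from degrading.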
@article{zhang2025_2506.06820,
  title={Beyond Classification: Towards Speech Emotion Reasoning with Multitask AudioLLMs},
  author={Wenyu Zhang and Yingxu He and Geyu Lin and Zhuohan Liu and Shuo Sun and Bin Wang and Xunlong Zou and Jeremy H. M. Wong and Qiongqiong Wang and Hardik B. Sailor and Nancy F. Chen and Ai Ti Aw},
  journal={arXiv preprint arXiv:2506.06820},
  year={2025}
}