FinAudio: A Benchmark for Audio Large Language Models in Financial Applications

26 March 2025
Yupeng Cao
Haohang Li
Yangyang Yu
Shashidhar Reddy Javaji
Yueru He
Jimin Huang
Zining Zhu
Qianqian Xie
Xiao-Yang Liu
Koduvayur P. Subbalakshmi
Meikang Qiu
Sophia Ananiadou
Jian-Yun Nie
Abstract

Audio Large Language Models (AudioLLMs) have received widespread attention and have significantly improved performance on audio tasks such as conversation, audio understanding, and automatic speech recognition (ASR). Despite these advances, there is no benchmark for assessing AudioLLMs in financial scenarios, where audio data such as earnings conference calls and CEO speeches are crucial resources for financial analysis and investment decisions. In this paper, we introduce FinAudio, the first benchmark designed to evaluate the capacity of AudioLLMs in the financial domain. We first define three tasks based on the unique characteristics of the financial domain: 1) ASR for short financial audio, 2) ASR for long financial audio, and 3) summarization of long financial audio. We then curate two short-audio and two long-audio datasets and develop a novel dataset for financial audio summarization; together, these comprise the FinAudio benchmark. Finally, we evaluate seven prevalent AudioLLMs on FinAudio. Our evaluation reveals the limitations of existing AudioLLMs in the financial domain and offers insights for improving them. All datasets and code will be released.
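
The page does not include the benchmark's evaluation code, but ASR tasks such as the first two are conventionally scored with word error rate (WER). The sketch below is a minimal, self-contained WER implementation under that assumption; the earnings-call transcripts are hypothetical, and the paper's actual scoring pipeline may differ.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words,
    computed as Levenshtein distance over whitespace-tokenized words."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical earnings-call snippet: reference transcript vs. model output.
reference = "revenue grew twelve percent year over year"
hypothesis = "revenue grew twelve percent year on year"
print(f"WER: {word_error_rate(reference, hypothesis):.3f}")  # 1 sub / 7 words = 0.143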

@article{cao2025_2503.20990,
  title={FinAudio: A Benchmark for Audio Large Language Models in Financial Applications},
  author={Yupeng Cao and Haohang Li and Yangyang Yu and Shashidhar Reddy Javaji and Yueru He and Jimin Huang and Zining Zhu and Qianqian Xie and Xiao-yang Liu and Koduvayur Subbalakshmi and Meikang Qiu and Sophia Ananiadou and Jian-Yun Nie},
  journal={arXiv preprint arXiv:2503.20990},
  year={2025}
}