Understanding Financial Reasoning in AI: A Multimodal Benchmark and Error Learning Approach

22 April 2025
Shuangyan Deng, Haizhou Peng, Jiachen Xu, Chunhou Liu, Ciprian Doru Giurcuaneanu, Jiamou Liu
Community: AIFin
Main: 8 pages · 5 figures · 6 tables · Appendix: 3 pages · Bibliography: 3 pages
Abstract

Effective financial reasoning demands not only textual understanding but also the ability to interpret complex visual data such as charts, tables, and trend graphs. This paper introduces a new benchmark designed to evaluate how well AI models, especially large language and multimodal models, reason in finance-specific contexts. Covering 3,200 expert-level question-answer pairs across 15 core financial topics, the benchmark integrates both textual and visual modalities to reflect authentic analytical challenges in finance. To address limitations in current reasoning approaches, we propose an error-aware learning framework that leverages historical model mistakes and feedback to guide inference, without requiring fine-tuning. Our experiments across state-of-the-art models show that multimodal inputs significantly enhance performance and that incorporating error feedback leads to consistent and measurable improvements. The results highlight persistent challenges in visual understanding and mathematical logic, while also demonstrating the promise of self-reflective reasoning in financial AI systems. Our code and data can be found at https://anonymous/FinMR/CodeData.
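The error-aware framework described in the abstract can be pictured as a retrieval-augmented prompting loop: past mistakes and the feedback they received are stored, retrieved for a new question, and injected into the prompt at inference time, with no fine-tuning. The sketch below is an illustrative assumption rather than the authors' released code; ErrorRecord, retrieve_similar, error_aware_answer, and the naive word-overlap similarity heuristic are hypothetical, and llm stands in for whichever model is being evaluated.

# Hypothetical sketch of error-aware inference: retrieve earlier mistakes
# and their feedback, then condition the model's answer on them.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ErrorRecord:
    question: str      # an earlier benchmark question the model answered incorrectly
    wrong_answer: str  # the model's incorrect response
    feedback: str      # expert or automated explanation of the mistake


def retrieve_similar(memory: List[ErrorRecord], question: str, k: int = 2) -> List[ErrorRecord]:
    """Rank stored mistakes by naive word overlap with the new question (illustrative heuristic)."""
    def overlap(record: ErrorRecord) -> int:
        return len(set(record.question.lower().split()) & set(question.lower().split()))
    return sorted(memory, key=overlap, reverse=True)[:k]


def error_aware_answer(question: str,
                       memory: List[ErrorRecord],
                       llm: Callable[[str], str]) -> str:
    """Build a prompt that surfaces past mistakes and feedback, then query the model once."""
    lessons = "\n\n".join(
        f"Previous question: {r.question}\n"
        f"Incorrect answer: {r.wrong_answer}\n"
        f"Feedback: {r.feedback}"
        for r in retrieve_similar(memory, question)
    )
    prompt = (
        "You previously made the following mistakes on related financial problems.\n"
        f"{lessons}\n\n"
        "Avoid these errors when answering the new question.\n"
        f"New question: {question}\nAnswer:"
    )
    return llm(prompt)

Calling error_aware_answer(new_question, memory, llm=model_fn) with a list of previously logged mistakes then yields an answer conditioned on that feedback, which is the self-reflective behavior the paper evaluates.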

@article{deng2025_2506.06282,
  title={Understanding Financial Reasoning in AI: A Multimodal Benchmark and Error Learning Approach},
  author={Shuangyan Deng and Haizhou Peng and Jiachen Xu and Chunhou Liu and Ciprian Doru Giurcuaneanu and Jiamou Liu},
  journal={arXiv preprint arXiv:2506.06282},
  year={2025}
}