FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering

6 October 2024
Siqiao Xue
Tingting Chen
Fan Zhou
Qingyang Dai
Zhixuan Chu
Hongyuan Mei
Main: 9 pages · Bibliography: 4 pages · Appendix: 14 pages · 12 figures · 4 tables
Abstract

In this paper, we introduce FAMMA, an open-source benchmark for financial multilingual multimodal question answering (QA). Our benchmark aims to evaluate the abilities of multimodal large language models (MLLMs) in answering questions that require advanced financial knowledge and sophisticated reasoning. It includes 1,758 meticulously collected question-answer pairs from university textbooks and exams, spanning 8 major subfields in finance, including corporate finance, asset management, and financial engineering. Some of the QA pairs are written in Chinese or French, while the majority are in English. These questions are presented in a mixed format combining text and heterogeneous image types, such as charts, tables, and diagrams. We evaluate a range of state-of-the-art MLLMs on our benchmark, and our analysis shows that FAMMA poses a significant challenge for these models. Even advanced systems like GPT-4o and Claude-3.5-Sonnet achieve only 42% accuracy. Additionally, the open-source Qwen2-VL lags notably behind its proprietary counterparts. Lastly, we explore GPT o1-style reasoning chains to enhance the models' reasoning capabilities; these chains significantly improve error correction. Our FAMMA benchmark will facilitate future research to develop expert systems in financial QA. The leaderboard is available at https://famma-bench.github.io/famma/.
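The abstract describes the benchmark's structure (multilingual QA pairs spanning 8 finance subfields, with text mixed with charts, tables, and diagrams) and an accuracy-based evaluation. Below is a minimal Python sketch of how such a record might be represented and how model predictions might be scored by exact match; the field names, sample data, and scoring rule are illustrative assumptions, not FAMMA's actual schema or official metric.

# Hypothetical sketch: a FAMMA-style multimodal QA record and an
# exact-match accuracy score over model predictions.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class FinanceQARecord:
    question_id: str
    subfield: str                 # e.g. "corporate finance", "asset management"
    language: str                 # "en", "zh", or "fr"
    question_text: str
    image_paths: List[str] = field(default_factory=list)  # charts, tables, diagrams
    answer: str = ""              # reference answer


def exact_match_accuracy(records: List[FinanceQARecord],
                         predictions: Dict[str, str]) -> float:
    """Fraction of questions whose predicted answer equals the reference,
    after trimming whitespace and lowercasing both sides."""
    if not records:
        return 0.0
    correct = sum(
        1
        for r in records
        if predictions.get(r.question_id, "").strip().lower()
        == r.answer.strip().lower()
    )
    return correct / len(records)


if __name__ == "__main__":
    # Illustrative example only; not drawn from the benchmark.
    sample = [
        FinanceQARecord("q1", "financial engineering", "en",
                        "Which option strategy is shown in the payoff diagram?",
                        ["payoff_diagram.png"], answer="straddle"),
    ]
    print(exact_match_accuracy(sample, {"q1": "Straddle"}))  # -> 1.0

In practice, free-form financial answers would likely need a more tolerant scoring rule (numeric tolerance, or LLM-based grading), but exact match illustrates how a single accuracy figure such as the reported 42% could be computed.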

@article{xue2025_2410.04526,
  title={FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering},
  author={Siqiao Xue and Xiaojing Li and Fan Zhou and Qingyang Dai and Zhixuan Chu and Hongyuan Mei},
  journal={arXiv preprint arXiv:2410.04526},
  year={2025}
}