
MultiChartQA: Benchmarking Vision-Language Models on Multi-Chart Problems

Abstract

Multimodal Large Language Models (MLLMs) have demonstrated impressive abilities across various tasks, including visual question answering and chart comprehension, yet existing benchmarks for chart-related tasks fall short in capturing the complexity of real-world multi-chart scenarios. Current benchmarks primarily focus on single-chart tasks, neglecting the multi-hop reasoning required to extract and integrate information from multiple charts, which is essential in practical applications. To fill this gap, we introduce MultiChartQA, a benchmark that evaluates MLLMs' capabilities in four key areas: direct question answering, parallel question answering, comparative reasoning, and sequential reasoning. Our evaluation of a wide range of MLLMs reveals significant performance gaps compared to humans. These results highlight the challenges in multi-chart comprehension and the potential of MultiChartQA to drive advancements in this field. Our code and data are available at this https URL

@article{zhu2025_2410.14179,
  title={MultiChartQA: Benchmarking Vision-Language Models on Multi-Chart Problems},
  author={Zifeng Zhu and Mengzhao Jia and Zhihan Zhang and Lang Li and Meng Jiang},
  journal={arXiv preprint arXiv:2410.14179},
  year={2025}
}
