ChartX & ChartVLM: A Versatile Benchmark and Foundation Model for Complicated Chart Reasoning

19 February 2024
Renqiu Xia
Bo-Wen Zhang
Hancheng Ye
Xiangchao Yan
Qi Liu
Hongbin Zhou
Zijun Chen
Peng Ye
Min Dou
Botian Shi
Junchi Yan
Yu Qiao
    LRM
Abstract

Recently, many versatile Multi-modal Large Language Models (MLLMs) have emerged. However, their capacity to query information depicted in visual charts and to reason over the queried contents remains under-explored. In this paper, to comprehensively and rigorously benchmark the ability of off-the-shelf MLLMs in the chart domain, we construct ChartX, a multi-modal evaluation set covering 18 chart types, 7 chart tasks, 22 disciplinary topics, and high-quality chart data. In addition, we develop ChartVLM to offer a new perspective on handling multi-modal tasks that strongly depend on interpretable patterns, such as reasoning tasks over charts or geometric images. We evaluate the chart-related abilities of mainstream MLLMs and our ChartVLM on the proposed ChartX evaluation set. Extensive experiments demonstrate that ChartVLM surpasses both versatile and chart-related large models, achieving results comparable to GPT-4V. We believe that our study can pave the way for further exploration in creating a more comprehensive chart evaluation set and developing more interpretable multi-modal models. Both ChartX and ChartVLM are available at: this https URL
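The abstract describes QA-style chart tasks in which a model must read information off a rendered chart and answer questions about it. As a rough illustration of how such a split could be scored, here is a minimal Python sketch of exact-match accuracy broken down by chart type. The file name (chartx_qa_test.json) and record fields (image_path, question, answer, chart_type) are assumptions made for illustration, not the actual ChartX release format.

import json
from pathlib import Path
from collections import defaultdict
from typing import Callable


def evaluate_chart_qa(
    records_file: Path,
    predict: Callable[[Path, str], str],
) -> dict[str, float]:
    """Return exact-match accuracy per chart type for a QA-style chart task."""
    correct: defaultdict[str, int] = defaultdict(int)
    total: defaultdict[str, int] = defaultdict(int)

    # Hypothetical schema: a JSON list of records with image_path, question,
    # answer, and chart_type fields (not the actual ChartX format).
    with records_file.open() as f:
        records = json.load(f)

    for rec in records:
        chart_type = rec["chart_type"]  # e.g. "bar", "pie", "radar"
        prediction = predict(Path(rec["image_path"]), rec["question"])
        total[chart_type] += 1
        if prediction.strip().lower() == rec["answer"].strip().lower():
            correct[chart_type] += 1

    return {ct: correct[ct] / total[ct] for ct in total}


if __name__ == "__main__":
    # `predict` would wrap whichever MLLM is under evaluation and return its
    # raw text answer; here a stub that always abstains.
    dummy_predict = lambda image, question: "N/A"
    scores = evaluate_chart_qa(Path("chartx_qa_test.json"), dummy_predict)
    for chart_type, acc in sorted(scores.items()):
        print(f"{chart_type}: {acc:.3f}")

In practice, stricter or softer matching (e.g. numeric tolerance) would likely be needed for tasks beyond simple QA, but the per-chart-type breakdown mirrors the kind of evaluation the benchmark is built for.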

@article{xia2025_2402.12185,
  title={ChartX \& ChartVLM: A Versatile Benchmark and Foundation Model for Complicated Chart Reasoning},
  author={Renqiu Xia and Bo Zhang and Hancheng Ye and Xiangchao Yan and Qi Liu and Hongbin Zhou and Zijun Chen and Peng Ye and Min Dou and Botian Shi and Junchi Yan and Yu Qiao},
  journal={arXiv preprint arXiv:2402.12185},
  year={2025}
}