Evaluating MLLMs with Multimodal Multi-image Reasoning Benchmark

Abstract

With enhanced capabilities and widespread applications, Multimodal Large Language Models (MLLMs) are increasingly required to process and reason over multiple images simultaneously. However, existing MLLM benchmarks focus either on single-image visual reasoning or on multi-image understanding tasks with only final-answer evaluation, leaving the reasoning capabilities of MLLMs over multi-image inputs largely underexplored. To address this gap, we introduce the Multimodal Multi-image Reasoning Benchmark (MMRB), the first benchmark designed to evaluate structured visual reasoning across multiple images. MMRB comprises 92 sub-tasks covering spatial, temporal, and semantic reasoning, with multi-solution, CoT-style annotations generated by GPT-4o and refined by human experts. A derivative subset is designed to evaluate multimodal reward models in multi-image scenarios. To support fast and scalable evaluation, we propose a sentence-level matching framework using open-source LLMs. Extensive baseline experiments on 40 MLLMs, including 9 reasoning-specific models and 8 reward models, demonstrate that open-source MLLMs still lag significantly behind commercial MLLMs in multi-image reasoning tasks. Furthermore, current multimodal reward models are nearly incapable of handling multi-image reward ranking tasks.

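The abstract does not describe how the sentence-level matching framework works. The sketch below is only an illustration of the general idea of scoring a model's chain-of-thought against annotated reference steps sentence by sentence; the function names, the naive sentence splitter, and the lexical-overlap judge (a stand-in for the open-source LLM matcher the paper proposes) are hypothetical assumptions, not details taken from the paper.

# Hypothetical sketch of sentence-level matching for CoT evaluation.
# The judge is an injectable callable; the demo judge is a simple
# lexical-overlap stand-in so the example runs with no model dependencies.

import re
from typing import Callable, List

def split_sentences(text: str) -> List[str]:
    """Naive sentence splitter; a real pipeline would use a proper tokenizer."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def lexical_overlap_judge(reference: str, candidate: str) -> bool:
    """Toy matcher: match if at least half of the reference tokens appear in the candidate."""
    ref = set(re.findall(r"\w+", reference.lower()))
    cand = set(re.findall(r"\w+", candidate.lower()))
    return len(ref & cand) >= max(1, len(ref) // 2)

def sentence_match_score(
    reference_steps: List[str],
    model_answer: str,
    judge: Callable[[str, str], bool] = lexical_overlap_judge,
) -> float:
    """Fraction of annotated reasoning steps covered by at least one answer sentence."""
    if not reference_steps:
        return 0.0
    candidates = split_sentences(model_answer)
    matched = sum(
        1 for step in reference_steps
        if any(judge(step, sent) for sent in candidates)
    )
    return matched / len(reference_steps)

if __name__ == "__main__":
    steps = [
        "The clock in image 1 shows 3:00.",
        "The clock in image 2 shows 5:00, so two hours have passed.",
    ]
    answer = ("Image 1 shows a clock at 3:00. In image 2 the clock reads 5:00. "
              "Therefore two hours have passed between the images.")
    print(f"step coverage: {sentence_match_score(steps, answer):.2f}")

In this sketch, swapping lexical_overlap_judge for a prompt-based call to an open-source LLM would yield an LLM-judged variant; the scoring scheme itself (fraction of matched reference steps) is likewise only an assumed design choice.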
@article{cheng2025_2506.04280,
  title={Evaluating MLLMs with Multimodal Multi-image Reasoning Benchmark},
  author={Ziming Cheng and Binrui Xu and Lisheng Gong and Zuhe Song and Tianshuo Zhou and Shiqi Zhong and Siyu Ren and Mingxiang Chen and Xiangchao Meng and Yuxin Zhang and Yanlin Li and Lei Ren and Wei Chen and Zhiyuan Huang and Mingjie Zhan and Xiaojie Wang and Fangxiang Feng},
  journal={arXiv preprint arXiv:2506.04280},
  year={2025}
}