
MANBench: Is Your Multimodal Model Smarter than Human?

Main: 8 pages · Appendix: 17 pages · Bibliography: 2 pages · 20 figures · 5 tables
Abstract

The rapid advancement of Multimodal Large Language Models (MLLMs) has ignited discussions regarding their potential to surpass human performance in multimodal tasks. In response, we introduce MANBench (Multimodal Ability Norms Benchmark), a bilingual benchmark (English and Chinese) comprising 1,314 questions across nine tasks, spanning knowledge-based and non-knowledge-based domains. MANBench emphasizes intuitive reasoning, seamless cross-modal integration, and real-world complexity, providing a rigorous evaluation framework.

Through extensive human experiments involving diverse participants, we compared human performance against state-of-the-art MLLMs. The results indicate that while MLLMs excel in tasks like Knowledge and Text-Image Understanding, they struggle with deeper cross-modal reasoning tasks such as Transmorphic Understanding, Image Consistency, and Multi-image Understanding. Moreover, both humans and MLLMs face challenges in highly complex tasks like Puzzles and Spatial Imagination.

MANBench highlights the strengths and limitations of MLLMs, revealing that even advanced models fall short of achieving human-level performance across many domains. We hope MANBench will inspire efforts to bridge the gap between MLLMs and human multimodal capabilities. The code and dataset are available at this https URL.

@article{zhou2025_2506.11080,
  title={MANBench: Is Your Multimodal Model Smarter than Human?},
  author={Han Zhou and Qitong Xu and Yiheng Dong and Xin Yang},
  journal={arXiv preprint arXiv:2506.11080},
  year={2025}
}