
Improve MLLM Benchmark Efficiency through Interview

Main: 6 pages; Bibliography: 1 page; 5 figures; 3 tables
Abstract

The rapid development of Multimodal Large Language Models (MLLMs) has led to a wide range of MLLM applications, and a number of benchmark datasets have sprung up to assess their abilities. However, full-coverage Q&A testing on large-scale data is resource-intensive and time-consuming. To address this issue, we propose the MLLM Interview (MITV) strategy, which aims to obtain MLLM performance metrics quickly by asking fewer questions. First, we construct an interview dataset on top of an existing MLLM assessment dataset by adding difficulty labels based on the performance of several typical MLLMs. Second, we propose an MLLM interview strategy that obtains an initial picture of a model's performance from a small number of questions and then continually probes the model's limits. Extensive experiments show that the proposed MITV strategy performs well on MLLM benchmark datasets and can estimate a model's evaluation capability faster through a small number of questions and answers.
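To make the interview idea concrete, below is a minimal sketch of one plausible adaptive loop: a few mid-difficulty warm-up questions give an initial estimate, then a binary search over difficulty levels probes the model's limit. The function names (interview, ask), the data layout (items_by_level), and the limit-probing rule are all illustrative assumptions; the paper's actual MITV procedure may differ.

import random

# Hypothetical sketch of an adaptive "interview" loop. Assumes each
# benchmark item carries a difficulty label and ask(item) returns True
# if the model answers correctly. Illustrative only, not the paper's
# exact algorithm.

def interview(items_by_level, ask, warmup=5, max_questions=50):
    """Estimate the highest difficulty level a model can handle."""
    levels = sorted(items_by_level)
    lo, hi = 0, len(levels) - 1
    asked = 0

    # Warm-up: a few mid-difficulty questions give an initial estimate.
    mid = (lo + hi) // 2
    correct = sum(
        ask(random.choice(items_by_level[levels[mid]])) for _ in range(warmup)
    )
    asked += warmup
    if correct > warmup // 2:
        lo = mid          # model handles mid difficulty; probe harder items
    else:
        hi = mid          # model struggles; probe easier items

    # Limit probing: binary search over difficulty levels, so the number
    # of questions grows logarithmically rather than with dataset size.
    while lo < hi and asked < max_questions:
        mid = (lo + hi + 1) // 2
        if ask(random.choice(items_by_level[levels[mid]])):
            lo = mid      # passed: the model's limit is at least this level
        else:
            hi = mid - 1  # failed: the limit lies below this level
        asked += 1

    return levels[lo], asked

With, say, 10 difficulty levels, such a loop needs on the order of warmup + log2(10) questions rather than a full pass over the benchmark, which is the efficiency gain the abstract claims.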

@article{wen2025_2506.00883,
  title={Improve MLLM Benchmark Efficiency through Interview},
  author={Farong Wen and Yijin Guo and Junying Wang and Jiaohao Xiao and Yingjie Zhou and Chunyi Li and Zicheng Zhang and Guangtao Zhai},
  journal={arXiv preprint arXiv:2506.00883},
  year={2025}
}