As Multimodal Large Language Models (MLLMs) continue to evolve, their cognitive and reasoning capabilities have seen remarkable progress. However, challenges persist in fine-grained visual perception and commonsense causal inference. This paper introduces Argus Inspection, a multimodal benchmark with two levels of difficulty that emphasizes detailed visual recognition while incorporating real-world commonsense understanding to evaluate causal reasoning abilities. Building on this benchmark, we present the Eye of Panoptes framework, which integrates a binary parametric Sigmoid metric with an indicator function, enabling a more holistic evaluation of MLLMs' responses in opinion-based reasoning tasks. Experiments on 26 mainstream MLLMs show that the best performance in visual fine-grained reasoning reaches only 0.46, highlighting considerable room for improvement. Our research offers valuable perspectives for the continued refinement of MLLMs.
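The abstract does not define the metric's exact form, but a scoring rule that combines a parametric Sigmoid with an indicator function might be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the function name `panoptes_score`, the parameters `k` and `x0`, and the gating semantics are all assumptions for demonstration.

```python
import math

def panoptes_score(confidence: float, correct: bool,
                   k: float = 10.0, x0: float = 0.5) -> float:
    """Hypothetical sketch: an indicator function gates a parametric
    Sigmoid over the model's confidence. All names and parameters here
    are illustrative assumptions, not the paper's definition.

    confidence: model's self-reported confidence in [0, 1]
    correct:    whether the model's answer matches the reference
    k, x0:      Sigmoid steepness and midpoint (assumed parameters)
    """
    indicator = 1.0 if correct else 0.0          # binary gate
    sigmoid = 1.0 / (1.0 + math.exp(-k * (confidence - x0)))
    return indicator * sigmoid

# Wrong answers score 0 regardless of confidence; among correct
# answers, higher confidence yields a higher score.
print(panoptes_score(0.9, True))   # high-confidence correct answer
print(panoptes_score(0.9, False))  # confident but wrong -> 0.0
```

Under this sketch, the indicator penalizes incorrect responses outright, while the Sigmoid rewards well-calibrated confidence on correct ones.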
@article{yao2025_2506.14805,
  title   = {Argus Inspection: Do Multimodal Large Language Models Possess the Eye of Panoptes?},
  author  = {Yang Yao and Lingyu Li and Jiaxin Song and Chiyu Chen and Zhenqi He and Yixu Wang and Xin Wang and Tianle Gu and Jie Li and Yan Teng and Yingchun Wang},
  journal = {arXiv preprint arXiv:2506.14805},
  year    = {2025}
}