
Understand, Think, and Answer: Advancing Visual Reasoning with Large Multimodal Models

Abstract

Large Multimodal Models (LMMs) have recently demonstrated remarkable visual understanding performance on both vision-language and vision-centric tasks. However, they often fall short in integrating advanced, task-specific capabilities for compositional reasoning, which hinders their progress toward truly competent general vision models. To address this, we present a unified visual reasoning mechanism that enables LMMs to solve complicated compositional problems by leveraging their intrinsic capabilities (e.g., grounding and visual understanding). Unlike previous shortcut-learning mechanisms, our approach introduces a human-like understanding-thinking-answering process, allowing the model to complete all steps in a single forward pass without the need for multiple inferences or external tools. This design bridges the gap between foundational visual capabilities and general question answering, encouraging LMMs to generate faithful and traceable responses for complex visual reasoning. Meanwhile, we curate 334K visual instruction samples covering both general and text-rich scenes and involving multiple foundational visual capabilities. Our trained model, Griffon-R, performs end-to-end automatic understanding, self-thinking, and reasoned answering. Comprehensive experiments show that Griffon-R not only advances performance on complex visual reasoning benchmarks including VSR and CLEVR, but also enhances multimodal capabilities across various benchmarks such as MMBench and ScienceQA. Data, models, and code will be released at this https URL soon.
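As a rough illustration of the single-pass understanding-thinking-answering format described above, the sketch below shows one way a structured response could be prompted for and parsed. The tag names (<understand>, <think>, <answer>), the prompt wording, and the parsing helpers are assumptions made for illustration only; the paper's actual output schema and training format may differ.

```python
# Minimal sketch of a single-pass understand-think-answer response format.
# All tag names and helper functions here are hypothetical, not the paper's API.
import re
from typing import Optional


def build_prompt(question: str) -> str:
    """Ask the model to emit all three stages in one generation."""
    return (
        "First describe the relevant visual evidence, then reason over it, "
        "then give the final answer.\n"
        f"Question: {question}\n"
        "Respond as: <understand>...</understand><think>...</think><answer>...</answer>"
    )


def parse_response(text: str) -> dict[str, Optional[str]]:
    """Split a single generated string into its three stages."""
    stages: dict[str, Optional[str]] = {}
    for tag in ("understand", "think", "answer"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, flags=re.DOTALL)
        stages[tag] = match.group(1).strip() if match else None
    return stages


if __name__ == "__main__":
    # Mock output standing in for one model.generate() call on an image-question pair.
    mock = (
        "<understand>Two red cubes sit to the left of a blue sphere.</understand>"
        "<think>The question asks for the rightmost object; that is the sphere.</think>"
        "<answer>the blue sphere</answer>"
    )
    print(build_prompt("Which object is furthest to the right?"))
    print(parse_response(mock))
```

The key property this format captures is that grounding evidence, intermediate reasoning, and the final answer are all produced in one generation, so the answer remains traceable to the stated visual evidence without extra inference rounds or external tools.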

@article{zhan2025_2505.20753,
  title={Understand, Think, and Answer: Advancing Visual Reasoning with Large Multimodal Models},
  author={Yufei Zhan and Hongyin Zhao and Yousong Zhu and Shurong Zheng and Fan Yang and Ming Tang and Jinqiao Wang},
  journal={arXiv preprint arXiv:2505.20753},
  year={2025}
}