ResearchTrend.AI

MindOmni: Unleashing Reasoning Generation in Vision Language Models with RGPO

19 May 2025
Yicheng Xiao
Lin Song
Yukang Chen
Yingmin Luo
Yuxin Chen
Yukang Gan
Wei Huang
Xiu Li
Xiaojuan Qi
Ying Shan
Abstract

Recent text-to-image systems face limitations in handling multimodal inputs and complex reasoning tasks. We introduce MindOmni, a unified multimodal large language model that addresses these challenges by incorporating reasoning generation through reinforcement learning. MindOmni is built with a three-phase training strategy: i) design of a unified vision-language model with a decoder-only diffusion module, ii) supervised fine-tuning with Chain-of-Thought (CoT) instruction data, and iii) our proposed Reasoning Generation Policy Optimization (RGPO) algorithm, which uses multimodal feedback to effectively guide policy updates. Experimental results demonstrate that MindOmni outperforms existing models, achieving strong performance on both understanding and generation benchmarks, while showcasing advanced fine-grained reasoning-generation capabilities, especially for mathematical reasoning instructions. All code will be made public at this https URL.
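The abstract does not specify RGPO's update rule. As a rough, hypothetical illustration only (not the paper's actual implementation), a group-relative policy-optimization step of the kind common in recent reasoning RL work normalizes rewards within a group of sampled generations and weights each generation's log-probability by its resulting advantage. All function and variable names below are invented for this sketch:

```python
import math

def group_normalized_advantages(rewards):
    """Normalize rewards within one sampled group of generations.

    Each advantage is (r - mean) / std, so generations that score above
    the group average receive positive weight in the policy update.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = math.sqrt(var) + 1e-8  # epsilon avoids division by zero
    return [(r - mean) / std for r in rewards]

def policy_gradient_loss(logprobs, advantages):
    """Surrogate loss -E[A * log pi(y|x)], averaged over the group."""
    return -sum(a * lp for a, lp in zip(advantages, logprobs)) / len(logprobs)

# Example: four sampled generations scored by some multimodal reward
rewards = [1.0, 0.2, 0.8, 0.0]
advs = group_normalized_advantages(rewards)
logprobs = [-3.1, -4.0, -2.5, -5.2]  # per-sequence log-probabilities
loss = policy_gradient_loss(logprobs, advs)
```

Minimizing this loss raises the likelihood of generations with positive group-relative advantage; how RGPO constructs its multimodal feedback rewards is detailed only in the full paper.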

@article{xiao2025_2505.13031,
  title={MindOmni: Unleashing Reasoning Generation in Vision Language Models with RGPO},
  author={Yicheng Xiao and Lin Song and Yukang Chen and Yingmin Luo and Yuxin Chen and Yukang Gan and Wei Huang and Xiu Li and Xiaojuan Qi and Ying Shan},
  journal={arXiv preprint arXiv:2505.13031},
  year={2025}
}