Reinforcing Multimodal Understanding and Generation with Dual Self-rewards

9 June 2025
Jixiang Hong, Yiran Zhang, Guanzhong Wang, Yi Liu, Ji-Rong Wen, Rui Yan
Community: LRM
Links: arXiv (abs) · PDF · HTML
Main: 8 pages · 10 figures · 7 tables · Bibliography: 6 pages · Appendix: 5 pages
Abstract

Building upon large language models (LLMs), recent large multimodal models (LMMs) unify cross-modal understanding and generation into a single framework. However, LMMs still struggle to achieve accurate image-text alignment: they are prone to generating text responses that contradict the visual input, or images that fail to follow the text-to-image prompts. Current solutions require external supervision (e.g., human feedback or reward models) and only address unidirectional tasks, either understanding or generation. In this work, based on the observation that understanding and generation are inverse dual tasks, we introduce a self-supervised dual-reward mechanism to reinforce the understanding and generation capabilities of LMMs. Specifically, we sample multiple outputs for a given input in one task domain, then reverse the input-output pairs and compute the dual likelihood of the model as self-rewards for optimization. Extensive experimental results on visual understanding and generation benchmarks demonstrate that our method can effectively enhance the performance of the model without any external supervision, achieving especially remarkable improvements in text-to-image tasks.
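To make the mechanism concrete, below is a minimal toy sketch of one dual self-reward update, assuming a hypothetical unified-LMM interface (the class ToyLMM and the methods sample and log_prob are illustrative stand-ins, not the authors' code): sample k candidates in one direction, score each by the model's own likelihood of recovering the input in the dual direction, and use the group-normalized dual likelihoods as rewards in a REINFORCE-style objective.

import torch

class ToyLMM(torch.nn.Module):
    """Stand-in for a unified LMM; 'und' = image -> text, 'gen' = text -> image.
    Inputs and outputs are plain vectors here purely to keep the sketch runnable."""
    def __init__(self, dim=16):
        super().__init__()
        self.head = torch.nn.Linear(dim, dim)

    def sample(self, x, direction, k):
        # Hypothetical sampler: k candidate outputs for input x.
        return [torch.randn_like(x) for _ in range(k)]

    def log_prob(self, output, given, direction):
        # Hypothetical conditional log-likelihood log p(output | given).
        return -(self.head(given) - output).pow(2).mean()

def dual_self_reward_loss(model, x, direction="und", k=4):
    # Step 1: sample k outputs in the forward direction (e.g., captions for an image).
    dual = "gen" if direction == "und" else "und"
    candidates = model.sample(x, direction, k)
    # Step 2: self-reward = dual likelihood of recovering the input from each
    # output; rewards are held constant (no gradient) and normalized in-group.
    with torch.no_grad():
        rewards = torch.stack([model.log_prob(x, y, dual) for y in candidates])
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)
    # Step 3: REINFORCE-style objective, up-weighting candidates whose dual
    # likelihood (cycle consistency) is above the group average.
    logps = torch.stack([model.log_prob(y, x, direction) for y in candidates])
    return -(adv * logps).mean()

model = ToyLMM()
loss = dual_self_reward_loss(model, torch.randn(16))
loss.backward()  # gradients flow only through the forward log-probs

In the paper's actual setting, the forward scores would be token-level log-probabilities of each sampled candidate under the LMM, and the same recipe would apply in both directions (image-to-text and text-to-image); the toy above only mirrors the reward construction, not the model architecture or training loop.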

@article{hong2025_2506.07963,
  title={Reinforcing Multimodal Understanding and Generation with Dual Self-rewards},
  author={Jixiang Hong and Yiran Zhang and Guanzhong Wang and Yi Liu and Ji-Rong Wen and Rui Yan},
  journal={arXiv preprint arXiv:2506.07963},
  year={2025}
}