InternVL-U: Democratizing Unified Multimodal Models for Understanding, Reasoning, Generation and Editing

Changyao Tian, Danni Yang, Guanzhou Chen, Erfei Cui, Zhaokai Wang, Yuchen Duan, Penghao Yin, Sitao Chen, Ganlin Yang, Mingxin Liu, Zirun Zhu, Ziqian Fan, Leyao Gu, Haomin Wang, Qi Wei, Jinhui Yin, Xue Yang, Zhihang Zhong, Qi Qin, Yi Xin, Bin Fu, Yihao Liu, Jiaye Ge, Qipeng Guo, Gen Luo, Hongsheng Li, Yu Qiao, Kai Chen, Hongjie Zhang
Abstract

Unified multimodal models (UMMs) that integrate understanding, reasoning, generation, and editing face inherent trade-offs between maintaining strong semantic comprehension and acquiring powerful generation capabilities. In this report, we present InternVL-U, a lightweight 4B-parameter UMM that democratizes these capabilities within a unified framework. Guided by the principles of unified contextual modeling and modality-specific modular design with decoupled visual representations, InternVL-U integrates a state-of-the-art Multimodal Large Language Model (MLLM) with a specialized MMDiT-based visual generation head. To further bridge the gap between aesthetic generation and high-level intelligence, we construct a comprehensive data synthesis pipeline targeting high-semantic-density tasks, such as text rendering and scientific reasoning, under a reasoning-centric paradigm that leverages Chain-of-Thought (CoT) to better align abstract user intent with fine-grained visual generation details. Extensive experiments demonstrate that InternVL-U achieves a superior performance-efficiency balance. Despite using only 4B parameters, it consistently outperforms unified baselines more than 3x its size, such as BAGEL (14B), on various generation and editing tasks, while retaining strong multimodal understanding and reasoning capabilities.
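To make the decoupled design concrete, below is a minimal, runnable sketch of the wiring the abstract describes: an MLLM-style backbone provides unified contextual states, and a separate diffusion-transformer (MMDiT-style) head consumes those states to denoise image latents. All module names, dimensions, and attention wiring here are illustrative assumptions, not InternVL-U's actual implementation.

# Illustrative sketch only; InternVL-U's real architecture is described in
# the paper, and none of these class or parameter names come from it.
import torch
import torch.nn as nn

class ContextBackbone(nn.Module):
    """Stand-in for the MLLM: maps multimodal tokens to contextual states."""
    def __init__(self, dim=256, layers=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)

    def forward(self, tokens):                # tokens: (B, T, dim)
        return self.encoder(tokens)           # unified contextual states

class GenerationHead(nn.Module):
    """Stand-in for the MMDiT-based head: denoises latents given context."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, latents, context):      # latents: (B, L, dim)
        x, _ = self.self_attn(latents, latents, latents)
        x, _ = self.cross_attn(x, context, context)  # condition on MLLM states
        return self.mlp(x)                    # predicted noise / velocity

backbone, head = ContextBackbone(), GenerationHead()
tokens = torch.randn(1, 16, 256)              # placeholder multimodal tokens
latents = torch.randn(1, 64, 256)             # placeholder noisy image latents
noise_pred = head(latents, backbone(tokens))  # generation head kept separate
print(noise_pred.shape)                       # torch.Size([1, 64, 256])

The point of the separation is that the backbone's semantic representations stay untouched by generation training, while the head receives them purely as conditioning, which is one way to realize the "decoupled visual representations" principle.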
