
HaploOmni: Unified Single Transformer for Multimodal Video Understanding and Generation

Main: 1 page, 8 figures, 6 tables; Appendix: 14 pages
Abstract

With the advancement of language models, unified multimodal understanding and generation have made significant strides, with model architectures evolving from separated components to unified single-model frameworks. This paper explores an efficient training paradigm for building a single transformer for unified multimodal understanding and generation. Specifically, we propose a multimodal warmup strategy that exploits prior knowledge to extend capabilities. To address cross-modal compatibility challenges, we introduce feature pre-scaling and multimodal AdaLN techniques. Integrating the proposed techniques, we present HaploOmni, a new single multimodal transformer. With limited training costs, HaploOmni achieves competitive performance across multiple image and video understanding and generation benchmarks compared with advanced unified models. All code will be made public at this https URL.
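The abstract names multimodal AdaLN as one of the cross-modal compatibility techniques but does not detail it. Below is a minimal, hypothetical sketch of the general idea behind modality-conditioned adaptive layer normalization: each modality (e.g. text vs. video tokens) gets its own learned scale and shift applied after normalization. The function names, parameter layout, and dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the last (feature) dimension, without a learned affine.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def multimodal_adaln(x, modality, params):
    # params maps each modality name to its own (scale, shift) pair,
    # so tokens from different modalities are modulated separately.
    scale, shift = params[modality]
    return layer_norm(x) * (1.0 + scale) + shift

# Hypothetical usage: 4 tokens, 8-dim features, two modalities.
rng = np.random.default_rng(0)
dim = 8
params = {
    "text": (np.zeros(dim), np.zeros(dim)),           # identity modulation
    "video": (0.1 * np.ones(dim), 0.05 * np.ones(dim)),
}
x = rng.standard_normal((4, dim))
y_text = multimodal_adaln(x, "text", params)
y_video = multimodal_adaln(x, "video", params)
```

With the zero-initialized text parameters, `multimodal_adaln` reduces to plain layer norm, while the video branch applies its own affine modulation on top of the shared normalization.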

@article{xiao2025_2506.02975,
  title={HaploOmni: Unified Single Transformer for Multimodal Video Understanding and Generation},
  author={Yicheng Xiao and Lin Song and Rui Yang and Cheng Cheng and Zunnan Xu and Zhaoyang Zhang and Yixiao Ge and Xiu Li and Ying Shan},
  journal={arXiv preprint arXiv:2506.02975},
  year={2025}
}