Unified Autoregressive Visual Generation and Understanding with Continuous Tokens

17 March 2025
Lijie Fan
Luming Tang
Siyang Qin
Tianhong Li
Xuan S. Yang
Siyuan Qiao
Andreas Steiner
Chen Sun
Yuanzhen Li
Tao Zhu
Michael Rubinstein
Michalis Raptis
Deqing Sun
Radu Soricut
Abstract

We present UniFluid, a unified autoregressive framework for joint visual generation and understanding leveraging continuous visual tokens. Our unified autoregressive architecture processes multimodal image and text inputs, generating discrete tokens for text and continuous tokens for images. We find that, although there is an inherent trade-off between the image generation and understanding tasks, a carefully tuned training recipe enables them to improve each other. By selecting an appropriate loss balance weight, the unified model achieves results comparable to or exceeding those of single-task baselines on both tasks. Furthermore, we demonstrate that both stronger pre-trained LLMs and random-order generation during training are important for achieving high-fidelity image generation within this unified framework. Built upon the Gemma model series, UniFluid exhibits competitive performance across both image generation and understanding, and transfers well to various downstream tasks, including image editing for generation, as well as visual captioning and question answering for understanding.
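The abstract's central training detail is the balance between the text loss (over discrete tokens) and the image loss (over continuous tokens). Below is a minimal PyTorch sketch of such a weighted joint objective, assuming cross-entropy for the discrete text stream and an upstream-computed loss on the continuous image tokens; the function and parameter names (unified_loss, lam) and the form of the image loss are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F

def unified_loss(text_logits: torch.Tensor,   # (B, T, V) logits for discrete text tokens
                 text_targets: torch.Tensor,  # (B, T) target text token ids
                 image_loss: torch.Tensor,    # scalar loss over continuous image tokens
                 lam: float = 1.0) -> torch.Tensor:
    # Standard next-token cross-entropy for the discrete text stream.
    l_text = F.cross_entropy(
        text_logits.reshape(-1, text_logits.size(-1)),
        text_targets.reshape(-1),
    )
    # image_loss is assumed to come from a regression- or diffusion-style
    # head over the continuous image tokens, computed by the model upstream.
    # lam is the loss balance weight the abstract says must be tuned so the
    # two tasks help rather than hurt each other.
    return l_text + lam * image_loss

In this sketch, sweeping lam trades the two tasks off against each other; the abstract reports that an appropriately chosen weight lets the unified model match or exceed single-task baselines on both.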

@article{fan2025_2503.13436,
  title={Unified Autoregressive Visual Generation and Understanding with Continuous Tokens},
  author={Lijie Fan and Luming Tang and Siyang Qin and Tianhong Li and Xuan Yang and Siyuan Qiao and Andreas Steiner and Chen Sun and Yuanzhen Li and Tao Zhu and Michael Rubinstein and Michalis Raptis and Deqing Sun and Radu Soricut},
  journal={arXiv preprint arXiv:2503.13436},
  year={2025}
}