
Generative Multimodal Pretraining with Discrete Diffusion Timestep Tokens

Abstract

Recent endeavors in Multimodal Large Language Models (MLLMs) aim to unify visual comprehension and generation by combining LLMs and diffusion models, the state of the art in each respective task. Existing approaches rely on spatial visual tokens, where image patches are encoded and arranged in a spatial order (e.g., raster scan). However, we show that spatial tokens lack the recursive structure inherent to language and hence form an impossible language for an LLM to master. In this paper, we build a proper visual language by leveraging diffusion timesteps to learn discrete, recursive visual tokens. The proposed tokens recursively compensate for the progressive attribute loss in noisy images as timesteps increase, enabling the diffusion model to reconstruct the original image at any timestep. This allows us to integrate the strengths of LLMs in autoregressive reasoning and of diffusion models in precise image generation, achieving seamless multimodal comprehension and generation within a unified framework. Extensive experiments show that we achieve superior performance on multimodal comprehension and generation simultaneously compared with other MLLMs. Project page: this https URL.
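
To illustrate the core idea at a conceptual level, the sketch below shows one way a discrete token could be learned per diffusion timestep, encoding the attributes destroyed by that step's added noise so that tokens 1..t can condition reconstruction from the noisy image. All names, shapes, and design choices here (TimestepTokenizer, codebook size, feature dimensions) are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch only; names and shapes are assumed, not from the paper.
    import torch
    import torch.nn as nn

    class TimestepTokenizer(nn.Module):
        """Encodes the visual attributes lost between x_{t-1} and x_t as one discrete token."""
        def __init__(self, dim=256, codebook_size=8192):
            super().__init__()
            self.encoder = nn.Linear(dim * 2, dim)            # sees features of the cleaner and noisier images
            self.codebook = nn.Embedding(codebook_size, dim)  # shared discrete vocabulary

        def forward(self, feat_prev, feat_noisy):
            # Encode the residual information that this timestep's noise destroyed.
            z = self.encoder(torch.cat([feat_prev, feat_noisy], dim=-1))
            # Quantize to the nearest codebook entry (straight-through estimator omitted).
            dists = torch.cdist(z, self.codebook.weight)      # (batch, codebook_size)
            token_ids = dists.argmin(dim=-1)                  # discrete timestep token ids
            return token_ids, self.codebook(token_ids)

    # Recursive use: tokens 1..t condition the diffusion model so it can
    # reconstruct the original image from the noisy input at any timestep t.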

@article{pan2025_2504.14666,
  title={Generative Multimodal Pretraining with Discrete Diffusion Timestep Tokens},
  author={Kaihang Pan and Wang Lin and Zhongqi Yue and Tenglong Ao and Liyu Jia and Wei Zhao and Juncheng Li and Siliang Tang and Hanwang Zhang},
  journal={arXiv preprint arXiv:2504.14666},
  year={2025}
}