Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces

9 June 2025
Kevin Rojas
Yuchen Zhu
Sichen Zhu
Felix X.-F. Ye
Molei Tao
Main: 8 pages · Bibliography: 5 pages · Appendix: 19 pages · 12 figures · 10 tables
Abstract

Diffusion models have demonstrated remarkable performance in generating unimodal data across various tasks, including image, video, and text generation. In contrast, the joint generation of multimodal data with diffusion models is still in the early stages of exploration. Existing approaches rely heavily on external preprocessing protocols, such as tokenizers and variational autoencoders, to harmonize varied data representations into a unified, unimodal format. This process places strong demands on the accuracy of the encoders and decoders, which can be problematic for applications with limited data. To lift this restriction, we propose a novel framework for building multimodal diffusion models on arbitrary state spaces, enabling native generation of coupled data across different modalities. By introducing a decoupled noise schedule for each modality, we enable both unconditional and modality-conditioned generation within a single model. We empirically validate our approach on text-image generation and mixed-type tabular data synthesis, demonstrating that it achieves competitive performance.
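
The core idea, a separate noise schedule (and hence a separate diffusion time) for each modality, can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy example, not the authors' implementation: it pairs a Gaussian corruption of a continuous modality with a masking-style corruption of a discrete one, and draws the two times independently so that the same model encounters both joint states (both times positive) and modality-conditioned states (one time equal to zero).

import torch

# Toy sketch (not the paper's implementation): decoupled noise schedules for a
# two-modality diffusion model, e.g. continuous image latents plus discrete
# text tokens. Function names, the linear sigma schedule, and the masking-style
# discrete corruption are illustrative assumptions.

def corrupt_continuous(x0: torch.Tensor, t: float, sigma_max: float = 1.0) -> torch.Tensor:
    # Gaussian corruption of the continuous modality at its own time t in [0, 1].
    sigma_t = sigma_max * t
    return x0 + sigma_t * torch.randn_like(x0)

def corrupt_discrete(tokens: torch.Tensor, t: float, mask_id: int) -> torch.Tensor:
    # Masking corruption of the discrete modality: each token is independently
    # replaced by a mask symbol with probability t.
    replace = torch.rand(tokens.shape, device=tokens.device) < t
    return torch.where(replace, torch.full_like(tokens, mask_id), tokens)

def make_training_state(x_img: torch.Tensor, x_txt: torch.Tensor, mask_id: int):
    # Decoupled schedule: sample an independent diffusion time per modality.
    # When t_txt == 0 the text stays clean, so the model also sees
    # text-conditioned image generation (and vice versa); when both times are
    # positive it sees joint, unconditional generation.
    t_img = torch.rand(()).item()
    t_txt = torch.rand(()).item()
    return (corrupt_continuous(x_img, t_img), t_img), (corrupt_discrete(x_txt, t_txt, mask_id), t_txt)

At sampling time, under this sketch, fixing one modality's time at zero throughout the reverse process yields generation conditioned on that modality, while running both reverse processes together yields unconditional joint generation.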

@article{rojas2025_2506.07903,
  title={Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces},
  author={Kevin Rojas and Yuchen Zhu and Sichen Zhu and Felix X.-F. Ye and Molei Tao},
  journal={arXiv preprint arXiv:2506.07903},
  year={2025}
}