Exploring the Deep Fusion of Large Language Models and Diffusion Transformers for Text-to-Image Synthesis

15 May 2025
Bingda Tang
Boyang Zheng
Xichen Pan
Sayak Paul
Saining Xie
Abstract

This paper does not describe a new method; instead, it provides a thorough exploration of an important yet understudied design space related to recent advances in text-to-image synthesis -- specifically, the deep fusion of large language models (LLMs) and diffusion transformers (DiTs) for multi-modal generation. Previous studies mainly focused on overall system performance rather than detailed comparisons with alternative methods, and key design details and training recipes were often left undisclosed. These gaps create uncertainty about the real potential of this approach. To fill these gaps, we conduct an empirical study on text-to-image generation, performing controlled comparisons with established baselines, analyzing important design choices, and providing a clear, reproducible recipe for training at scale. We hope this work offers meaningful data points and practical guidelines for future research in multi-modal generation.
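The abstract does not spell out what "deep fusion" looks like architecturally. The following is a minimal, hypothetical PyTorch sketch, assuming "deep fusion" means layer-wise joint self-attention between DiT image tokens and the hidden states of the corresponding LLM layer; the class name, dimensions, and projection details are illustrative and not taken from the paper.

# Hypothetical sketch of one "deep fusion" block: DiT image tokens and the
# hidden states from the matching LLM layer are concatenated and processed
# with joint self-attention, so text conditioning enters at every layer
# rather than only once at the input. All names and shapes are assumptions.
import torch
import torch.nn as nn

class DeepFusionBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm_img = nn.LayerNorm(dim)
        self.norm_txt = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_out = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, img_tokens: torch.Tensor, llm_hidden: torch.Tensor) -> torch.Tensor:
        # img_tokens: (B, N_img, dim) noisy-latent patch tokens.
        # llm_hidden: (B, N_txt, dim) hidden states from the corresponding LLM layer,
        # assumed already projected to the DiT width `dim`.
        joint = torch.cat([self.norm_txt(llm_hidden), self.norm_img(img_tokens)], dim=1)
        attn_out, _ = self.attn(joint, joint, joint)
        # Keep only the image stream; the LLM stream is assumed frozen here.
        img = img_tokens + attn_out[:, llm_hidden.size(1):]
        return img + self.mlp(self.norm_out(img))

# Usage: fuse 64 image patch tokens with 32 LLM token states of width 1024.
block = DeepFusionBlock(dim=1024)
img = torch.randn(2, 64, 1024)
txt = torch.randn(2, 32, 1024)
out = block(img, txt)  # shape (2, 64, 1024)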

@article{tang2025_2505.10046,
  title={Exploring the Deep Fusion of Large Language Models and Diffusion Transformers for Text-to-Image Synthesis},
  author={Bingda Tang and Boyang Zheng and Xichen Pan and Sayak Paul and Saining Xie},
  journal={arXiv preprint arXiv:2505.10046},
  year={2025}
}