DiT4SR: Taming Diffusion Transformer for Real-World Image Super-Resolution

30 March 2025
Zheng-Peng Duan
Jiawei Zhang
Xin Jin
Ziheng Zhang
Zheng Xiong
Dongqing Zou
Jimmy S. Ren
Chun-Le Guo
Chongyi Li
Abstract

Large-scale pre-trained diffusion models are becoming increasingly popular for solving the Real-World Image Super-Resolution (Real-ISR) problem because of their rich generative priors. The recently developed diffusion transformer (DiT) has demonstrated overwhelming performance over traditional UNet-based architectures in image generation, raising the question: can we adopt the advanced DiT-based diffusion model for Real-ISR? To this end, we propose DiT4SR, one of the pioneering works to tame the large-scale DiT model for Real-ISR. Instead of directly injecting embeddings extracted from low-resolution (LR) images, as ControlNet does, we integrate the LR embeddings into the original attention mechanism of DiT, allowing bidirectional information flow between the LR latent and the generated latent. The sufficient interaction of these two streams lets the LR stream evolve with the diffusion process, producing progressively refined guidance that better aligns with the generated latent at each diffusion step. Additionally, the LR guidance is injected into the generated latent via a cross-stream convolution layer, compensating for DiT's limited ability to capture local information. These simple but effective designs endow the DiT model with superior performance in Real-ISR, as demonstrated by extensive experiments. Project Page: this https URL.
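The two mechanisms the abstract describes — bidirectional attention between the LR stream and the generated stream, followed by a convolutional injection of LR guidance — can be sketched roughly as follows. This is a minimal NumPy illustration under assumptions of our own: all function names, shapes, and the simple 3×3 residual kernel are hypothetical and do not reproduce the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_attention(gen_tokens, lr_tokens, Wq, Wk, Wv):
    """Attention over the concatenation of both streams.

    Because queries, keys, and values come from the concatenated
    sequence, information flows in both directions: the LR stream is
    updated alongside the generated latent at every diffusion step.
    """
    x = np.concatenate([gen_tokens, lr_tokens], axis=0)  # (N_gen + N_lr, d)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    out = attn @ v
    n_gen = gen_tokens.shape[0]
    return out[:n_gen], out[n_gen:]  # updated generated / LR streams

def cross_stream_conv(gen_map, lr_map, kernel):
    """Inject local LR guidance into the generated latent.

    A small 3x3 convolution over the (zero-padded) LR feature map is
    added residually to the generated map, supplying the local detail
    that plain token attention captures poorly.
    """
    h, w, _ = lr_map.shape
    padded = np.pad(lr_map, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(gen_map)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3, :]      # (3, 3, c)
            out[i, j] = (patch * kernel).sum(axis=(0, 1))
    return gen_map + out  # residual injection of LR guidance

# Toy shapes: a 4x4 latent grid with 8 channels per token.
d = 8
gen = rng.standard_normal((16, d))
lr = rng.standard_normal((16, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

gen_out, lr_out = joint_attention(gen, lr, Wq, Wk, Wv)
fused = cross_stream_conv(gen_out.reshape(4, 4, d),
                          lr_out.reshape(4, 4, d),
                          rng.standard_normal((3, 3, d)) * 0.1)
print(fused.shape)  # (4, 4, 8)
```

The key design point the abstract emphasizes is that the LR tokens are part of the attention sequence itself rather than a frozen conditioning signal (as in ControlNet-style injection), so the guidance is re-computed at each step.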

@article{duan2025_2503.23580,
  title={DiT4SR: Taming Diffusion Transformer for Real-World Image Super-Resolution},
  author={Zheng-Peng Duan and Jiawei Zhang and Xin Jin and Ziheng Zhang and Zheng Xiong and Dongqing Zou and Jimmy S. Ren and Chun-Le Guo and Chongyi Li},
  journal={arXiv preprint arXiv:2503.23580},
  year={2025}
}