ResearchTrend.AI
IRBridge: Solving Image Restoration Bridge with Pre-trained Generative Diffusion Models

30 May 2025
Hanting Wang
Tao Jin
Wang Lin
Shulei Wang
Hai Huang
Shengpeng Ji
Zhou Zhao
Main: 7 pages · 26 figures · 4 tables · Bibliography: 4 pages · Appendix: 12 pages
Abstract

Bridge models in image restoration construct a diffusion process from degraded to clear images. However, existing methods typically require training a bridge model from scratch for each specific type of degradation, resulting in high computational costs and limited performance. This work aims to efficiently leverage pretrained generative priors within existing image restoration bridges to eliminate this requirement. The main challenge is that standard generative models are typically designed for a diffusion process that starts from pure noise, while restoration tasks begin with a low-quality image, resulting in a mismatch in the state distributions between the two processes. To address this challenge, we propose a transition equation that bridges two diffusion processes with the same endpoint distribution. Based on this, we introduce the IRBridge framework, which enables the direct utilization of generative models within image restoration bridges, offering a more flexible and adaptable approach to image restoration. Extensive experiments on six image restoration tasks demonstrate that IRBridge efficiently integrates generative priors, resulting in improved robustness and generalization performance. Code will be available on GitHub.
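The core idea in the abstract — mapping a state of a degraded-to-clean bridge process onto a state of a noise-to-clean diffusion process with a matching marginal, so a pre-trained model can be reused — can be illustrated with a toy sketch. Note this is a minimal illustration of the general concept only, not the paper's actual transition equation: the bridge marginal, the variance schedule, and both helper functions below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8))             # stand-in for a clean image
y = x0 + 0.5 * rng.normal(size=(8, 8))   # stand-in for a degraded image

def bridge_state(x0, y, t, sigma=0.1):
    """Toy bridge marginal (assumed form): interpolate between the
    clean image (t=0) and the degraded image (t=1), plus bridge noise."""
    eps = rng.normal(size=x0.shape)
    return (1.0 - t) * x0 + t * y + sigma * np.sqrt(t * (1.0 - t)) * eps

def to_diffusion_state(x0_est, alpha_bar_s):
    """Re-noise an intermediate clean-image estimate into the VP-diffusion
    marginal x_s = sqrt(abar_s)*x0 + sqrt(1-abar_s)*eps — the state
    distribution a pre-trained noise-initialized diffusion model expects."""
    eps = rng.normal(size=x0_est.shape)
    return np.sqrt(alpha_bar_s) * x0_est + np.sqrt(1.0 - alpha_bar_s) * eps

# Take a mid-trajectory bridge state and (using it as a crude x0 estimate)
# map it into the matching state of the standard diffusion process.
x_t = bridge_state(x0, y, t=0.3)
x_s = to_diffusion_state(x_t, alpha_bar_s=0.9)
print(x_s.shape)
```

The point of the mapping is that the pre-trained denoiser never sees the bridge's state distribution directly; it only ever receives states drawn from its own training-time marginal.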

@article{wang2025_2505.24406,
  title={IRBridge: Solving Image Restoration Bridge with Pre-trained Generative Diffusion Models},
  author={Hanting Wang and Tao Jin and Wang Lin and Shulei Wang and Hai Huang and Shengpeng Ji and Zhou Zhao},
  journal={arXiv preprint arXiv:2505.24406},
  year={2025}
}