Restoration Score Distillation: From Corrupted Diffusion Pretraining to One-Step High-Quality Generation

Abstract

Learning generative models from corrupted data is a fundamental yet persistently challenging task across scientific disciplines, particularly when access to clean data is limited or expensive. Denoising Score Distillation (DSD) \cite{chen2025denoising} recently introduced a novel and surprisingly effective strategy that leverages score distillation to train high-fidelity generative models directly from noisy observations. Building upon this foundation, we propose \textit{Restoration Score Distillation} (RSD), a principled generalization of DSD that accommodates a broader range of corruption types, such as blurred, incomplete, or low-resolution images. RSD operates by first pretraining a teacher diffusion model solely on corrupted data and subsequently distilling it into a single-step generator that produces high-quality reconstructions. Empirically, RSD consistently surpasses its teacher model across diverse restoration tasks on both natural and scientific datasets. Moreover, beyond standard diffusion objectives, the RSD framework is compatible with several corruption-aware training techniques, such as Ambient Tweedie, Ambient Diffusion, and its Fourier-space variant, enabling flexible integration with recent advances in diffusion modeling. Theoretically, we demonstrate that in a linear regime, RSD recovers the eigenspace of the clean data covariance matrix from linear measurements, thereby serving as an implicit regularizer. This interpretation recasts score distillation not only as a sampling acceleration technique but also as a principled approach to enhancing generative performance in severely degraded data regimes.
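
To give a concrete sense of the linear-regime claim, the following is a minimal numerical sketch (not the paper's algorithm or proof) of the underlying identifiability fact: with random inpainting masks as the linear measurements, the second moment of the corrupted data determines the clean covariance, and hence its eigenspace. All dimensions, the keep-probability p, and variable names below are illustrative assumptions.

# Toy illustration (assumption-laden sketch, not RSD itself): with Bernoulli
# masks m and observations y = m * x, we have E[y_i y_j] = p^2 * Sigma_ij for
# i != j and p * Sigma_ii on the diagonal. Unbiasing the empirical second
# moment and taking its top eigenvectors recovers the clean eigenspace,
# mirroring the linear-regime statement in the abstract.
import numpy as np

rng = np.random.default_rng(0)
d, r, n, p = 32, 4, 200_000, 0.6   # ambient dim, clean rank, samples, keep-probability

# Ground-truth low-rank clean data: x = U z with orthonormal U (d x r).
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
X = rng.standard_normal((n, r)) @ U.T          # clean samples (used only to simulate corruption)

# Corrupted observations: independent random inpainting masks per sample.
M = (rng.random((n, d)) < p).astype(float)
Y = M * X

# Empirical second moment of the corrupted data.
S = Y.T @ Y / n

# Unbias: off-diagonal entries are scaled by p^2, diagonal entries by p.
Sigma_hat = S / p**2
np.fill_diagonal(Sigma_hat, np.diag(S) / p)

# Compare the recovered top-r eigenspace with the true one via principal angles.
_, eigvecs = np.linalg.eigh(Sigma_hat)
U_hat = eigvecs[:, -r:]
principal_cos = np.linalg.svd(U.T @ U_hat, compute_uv=False)
print("smallest principal cosine between true and recovered eigenspace:",
      principal_cos.min())                      # near 1.0 => subspaces nearly coincide

RSD itself operates on images with a distilled one-step generator rather than an explicit covariance estimate; this sketch only isolates the statistical reason the clean eigenspace remains recoverable from such linear measurements.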

@article{zhang2025_2505.13377,
  title={Restoration Score Distillation: From Corrupted Diffusion Pretraining to One-Step High-Quality Generation},
  author={Yasi Zhang and Tianyu Chen and Zhendong Wang and Ying Nian Wu and Mingyuan Zhou and Oscar Leong},
  journal={arXiv preprint arXiv:2505.13377},
  year={2025}
}