Noise Conditional Variational Score Distillation

11 June 2025
Xinyu Peng
Ziyang Zheng
Yaoming Wang
Han Li
Nuowen Kan
Wenrui Dai
Chenglin Li
Junni Zou
Hongkai Xiong
Abstract

We propose Noise Conditional Variational Score Distillation (NCVSD), a novel method for distilling pretrained diffusion models into generative denoisers. We achieve this by revealing that the unconditional score function implicitly characterizes the score function of denoising posterior distributions. By integrating this insight into the Variational Score Distillation (VSD) framework, we enable scalable learning of generative denoisers capable of approximating samples from the denoising posterior distribution across a wide range of noise levels. The proposed generative denoisers exhibit desirable properties that allow fast generation while preserving the benefits of iterative refinement: (1) fast one-step generation through sampling from pure Gaussian noise at high noise levels; (2) improved sample quality by scaling test-time compute with multi-step sampling; and (3) zero-shot probabilistic inference for flexible and controllable sampling. We evaluate NCVSD through extensive experiments, including class-conditional image generation and inverse problem solving. By scaling test-time compute, our method outperforms teacher diffusion models and is on par with consistency models of larger sizes. Additionally, with significantly fewer NFEs than diffusion-based methods, we achieve record-breaking LPIPS on inverse problems.
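The multi-step sampling loop described in point (2) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generative_denoiser` is a hypothetical stand-in for a trained NCVSD denoiser (a real one would be a neural network distilled from a diffusion model), and the noise schedule values are arbitrary. The structure shows the general pattern: start from pure Gaussian noise at the highest noise level, draw a denoised sample at each step, then re-noise to the next lower level; a single step reduces to one-step generation.

```python
import numpy as np

def generative_denoiser(x_t, sigma, rng):
    # Hypothetical placeholder for a trained generative denoiser g(x_t, sigma)
    # that would approximate a sample from the denoising posterior p(x_0 | x_t).
    # Here we simply apply a deterministic shrinkage toward zero for illustration.
    return x_t / (1.0 + sigma ** 2)

def multistep_sample(shape, sigmas, rng):
    """Multi-step sampling with a generative denoiser.

    sigmas: decreasing noise levels; sigmas[0] is the highest (pure-noise) level.
    With len(sigmas) == 1 this reduces to one-step generation.
    """
    # Start from pure Gaussian noise at the highest noise level.
    x = sigmas[0] * rng.standard_normal(shape)
    for i, sigma in enumerate(sigmas):
        # Draw an (approximate) posterior sample of the clean data.
        x0 = generative_denoiser(x, sigma, rng)
        if i + 1 < len(sigmas):
            # Re-noise to the next, lower noise level and refine again.
            x = x0 + sigmas[i + 1] * rng.standard_normal(shape)
        else:
            x = x0
    return x

rng = np.random.default_rng(0)
sample = multistep_sample((4,), sigmas=[80.0, 10.0, 1.0], rng=rng)
print(sample.shape)
```

Increasing the number of entries in `sigmas` spends more test-time compute on refinement, which is the knob the abstract refers to when trading NFEs for sample quality.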

@article{peng2025_2506.09416,
  title={Noise Conditional Variational Score Distillation},
  author={Xinyu Peng and Ziyang Zheng and Yaoming Wang and Han Li and Nuowen Kan and Wenrui Dai and Chenglin Li and Junni Zou and Hongkai Xiong},
  journal={arXiv preprint arXiv:2506.09416},
  year={2025}
}