Denoising Diffusion Variational Inference: Diffusion Models as Expressive Variational Posteriors

5 January 2024
Wasu Top Piriyakulkij, Yingheng Wang, Volodymyr Kuleshov
Abstract

We propose denoising diffusion variational inference (DDVI), a black-box variational inference algorithm for latent variable models that relies on diffusion models as flexible approximate posteriors. Specifically, our method introduces an expressive class of diffusion-based variational posteriors that perform iterative refinement in latent space; we train these posteriors with a novel regularized evidence lower bound (ELBO) on the marginal likelihood inspired by the wake-sleep algorithm. Our method is easy to implement (it fits a regularized extension of the ELBO), is compatible with black-box variational inference, and outperforms alternative classes of approximate posteriors based on normalizing flows or adversarial networks. We find that DDVI improves inference and learning in deep latent variable models across common benchmarks as well as on a motivating task in biology -- inferring latent ancestry from human genomes -- where it outperforms strong baselines on the Thousand Genomes dataset.
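
For context on the objective the abstract describes, the standard evidence lower bound that DDVI builds on can be written as follows; the wake-sleep-inspired regularizer the paper adds on top of this bound is not specified in the abstract, so only the textbook ELBO is shown:

\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)

In DDVI, the approximate posterior q_\phi(z \mid x) is a conditional diffusion model rather than a Gaussian or a normalizing flow: a latent sample is produced by starting from noise and iteratively refining it. Below is a minimal, hypothetical sketch of such a sampling loop; the names (LatentDenoiser, sample_posterior) and the plain denoising update are illustrative assumptions, not the paper's architecture or API.

import torch
import torch.nn as nn

class LatentDenoiser(nn.Module):
    # Toy conditional denoiser: maps (z_t, t, x) to a refined latent.
    # Illustrative only; the paper's posterior network is not specified here.
    def __init__(self, latent_dim, x_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + x_dim + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z, t, x):
        # Encode the step index as a scalar feature (a crude stand-in for
        # the time embeddings used in real diffusion models).
        t_feat = torch.full((z.shape[0], 1), float(t))
        return self.net(torch.cat([z, t_feat, x], dim=-1))

def sample_posterior(denoiser, x, latent_dim=8, num_steps=20):
    # Iterative refinement in latent space: start from Gaussian noise and
    # repeatedly apply the learned denoising step, conditioned on the data x.
    z = torch.randn(x.shape[0], latent_dim)
    for t in reversed(range(num_steps)):
        z = denoiser(z, t, x)
    return z

x = torch.randn(4, 10)                          # a batch of 4 observations
z = sample_posterior(LatentDenoiser(8, 10), x)  # one posterior sample per observation
print(z.shape)                                  # torch.Size([4, 8])

In this sketch the refinement loop is differentiable, so an ELBO estimate could be backpropagated through it, which is what makes a posterior of this form usable inside black-box variational inference; how DDVI actually trains the posterior is the regularized objective described in the abstract.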

View on arXiv: https://arxiv.org/abs/2401.02739
@article{piriyakulkij2024_2401.02739,
  title={Denoising Diffusion Variational Inference: Diffusion Models as Expressive Variational Posteriors},
  author={Wasu Top Piriyakulkij and Yingheng Wang and Volodymyr Kuleshov},
  journal={arXiv preprint arXiv:2401.02739},
  year={2024}
}