Convergence in KL Divergence of the Inexact Langevin Algorithm with Application to Score-based Generative Models

2 November 2022
Kaylee Yingxi Yang
Andre Wibisono
arXiv:2211.01512
Abstract

We study the Inexact Langevin Algorithm (ILA) for sampling with an estimated score function when the target distribution satisfies a log-Sobolev inequality (LSI), motivated by Score-based Generative Modeling (SGM). We prove long-term convergence in Kullback-Leibler (KL) divergence under the sufficient assumption that the error of the score estimator has a bounded moment generating function (MGF). This assumption is weaker than an $L^\infty$ error bound (which is too strong to hold in practice) and stronger than an $L^2$ error bound, which we show is not sufficient to guarantee convergence in general. Under the $L^\infty$ error assumption, we additionally prove convergence in Rényi divergence, which is stronger than KL divergence. We then study how to obtain a provably accurate score estimator satisfying the bounded-MGF assumption for LSI target distributions, using an estimator based on kernel density estimation. Combined with the convergence results, this yields the first end-to-end convergence guarantee for ILA at the population level. Finally, we generalize our convergence analysis to SGM and derive a complexity guarantee in KL divergence for data satisfying LSI under an MGF-accurate score estimator.
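
A standard way to write the ILA iteration is the unadjusted Langevin step with the true score $\nabla \log \pi$ replaced by an estimate $s(x)$: $x_{k+1} = x_k + \eta\, s(x_k) + \sqrt{2\eta}\, \xi_k$, where $\xi_k \sim \mathcal{N}(0, I)$ and $\eta$ is the step size. The short Python sketch below illustrates this iteration on a toy Gaussian target; the function names, step size, and the perturbed score standing in for an inexact estimator are illustrative assumptions, not the authors' kernel-density construction.

import numpy as np

def inexact_langevin(score_estimate, x0, step_size, n_steps, rng=None):
    """One way to run ILA: x <- x + h*s(x) + sqrt(2h)*N(0, I), repeated n_steps times."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step_size * score_estimate(x) + np.sqrt(2.0 * step_size) * noise
    return x

# Toy example: the exact score of a standard Gaussian is -x; a small
# perturbation stands in for a hypothetical inexact score estimator.
approx_score = lambda x: -x + 0.01 * np.sin(x)
sample = inexact_langevin(approx_score, x0=np.zeros(2), step_size=0.01, n_steps=1000)
print(sample)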
