High-dimensional Bayesian inference via the Unadjusted Langevin Algorithm

5 May 2016
Alain Durmus
Eric Moulines
Abstract

We consider in this paper the problem of sampling a high-dimensional probability distribution $\pi$ having a density with respect to the Lebesgue measure on $\mathbb{R}^d$, known up to a normalization constant, $x \mapsto \pi(x) = \mathrm{e}^{-U(x)} / \int_{\mathbb{R}^d} \mathrm{e}^{-U(y)} \, \mathrm{d}y$. Such a problem naturally occurs, for example, in Bayesian inference and machine learning. Under the assumptions that $U$ is continuously differentiable, $\nabla U$ is globally Lipschitz, and $U$ is strongly convex, we obtain non-asymptotic bounds for the convergence to stationarity, in Wasserstein distance of order $2$ and in total variation distance, of the sampling method based on the Euler discretization of the Langevin stochastic differential equation, for both constant and decreasing step sizes. The dependence of these bounds on the dimension of the state space is explicit. The convergence of an appropriately weighted empirical measure is also investigated, and bounds for the mean square error and an exponential deviation inequality are reported for functions which are measurable and bounded. An illustration of Bayesian inference for binary regression is presented to support our claims.
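To make the sampler concrete, here is a minimal sketch of the Unadjusted Langevin Algorithm the abstract describes: the Euler discretization $x_{k+1} = x_k - \gamma \nabla U(x_k) + \sqrt{2\gamma}\, Z_{k+1}$ of the Langevin SDE, with $Z_{k+1}$ standard Gaussian. The constant step size, the quadratic test potential $U(x) = \|x\|^2/2$ (strongly convex with Lipschitz gradient, so it satisfies the paper's assumptions), and the helper name `ula_sample` are illustrative choices, not taken from the paper.

```python
import numpy as np

def ula_sample(grad_U, x0, step, n_iters, rng=None):
    """Unadjusted Langevin Algorithm with a constant step size.

    Iterates x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * Z_k,
    the Euler discretization of dX_t = -grad U(X_t) dt + sqrt(2) dB_t,
    whose stationary distribution is pi(x) proportional to exp(-U(x)).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters, x.size))
    for k in range(n_iters):
        noise = rng.standard_normal(x.size)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Illustrative target: U(x) = |x|^2 / 2, so pi is the standard
# Gaussian on R^d and grad U(x) = x.
d = 10
chain = ula_sample(grad_U=lambda x: x, x0=np.zeros(d),
                   step=0.05, n_iters=20_000)
print(chain[5_000:].mean(axis=0))  # roughly 0 in each coordinate after burn-in
```

Because no Metropolis correction is applied, the chain is biased away from $\pi$ by an amount controlled by the step size; the paper's contribution is to quantify this bias and the convergence rate non-asymptotically, with explicit dependence on the dimension $d$.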
