arXiv:2301.03077
Stochastic Langevin Monte Carlo for (weakly) log-concave posterior distributions

8 January 2023
Marelys Crespo Navas
S. Gadat
X. Gendre
Abstract

In this paper, we investigate a continuous-time version of the Stochastic Langevin Monte Carlo method, introduced in [WT11], which incorporates a stochastic sampling step inside the traditional over-damped Langevin diffusion. This method is popular in machine learning for sampling posterior distributions. We pay specific attention to the computational cost in terms of $n$ (the number of observations that produces the posterior distribution) and $d$ (the dimension of the ambient space where the parameter of interest lives). We carry out our analysis in the weakly convex framework, parameterized with the help of the Kurdyka-Łojasiewicz (KL) inequality, which makes it possible to handle vanishing-curvature settings, far less restrictive than the simple strongly convex case. We establish that the final simulation horizon needed to obtain an $\varepsilon$-approximation (in terms of entropy) is of the order $\left(d \log^2(n)\right)^{(1+r)^2} \left[\log^2(\varepsilon^{-1}) + n^2 d^{2(1+r)} \log^{4(1+r)}(n)\right]$ with a Poissonian subsampling of parameter $\left(n \left(d \log^2(n)\right)^{1+r}\right)^{-1}$, where the parameter $r$ appears in the KL inequality and varies between $0$ (strongly convex case) and $1$ (limiting Laplace situation).
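The discrete-time ancestor of the diffusion studied here is the Stochastic Gradient Langevin Dynamics scheme of [WT11]: an Euler step of the over-damped Langevin diffusion whose drift is an unbiased subsampled estimate of the score of the posterior. The following is a minimal illustrative sketch only; the toy Gaussian model, prior variance, step size, and minibatch scheme are assumptions for the example and are not taken from the paper (which works in continuous time with Poissonian subsampling).

```python
import numpy as np

# Minimal SGLD sketch (assumed toy model, not the paper's setting):
# posterior for theta under x_i ~ N(theta, I), i = 1..n, prior N(0, 10 I).
rng = np.random.default_rng(0)
n, d = 1000, 2                                  # n observations, d-dim parameter
theta_true = np.array([1.0, -2.0])
X = theta_true + rng.normal(size=(n, d))        # synthetic data

def grad_log_posterior_estimate(theta, batch):
    """Unbiased minibatch estimate of grad log pi(theta | data)."""
    grad_prior = -theta / 10.0                  # grad log N(0, 10 I) prior
    # likelihood gradient rescaled by n / |batch| to remain unbiased
    grad_lik = (n / len(batch)) * np.sum(batch - theta, axis=0)
    return grad_prior + grad_lik

step, iters, batch_size = 1e-4, 20000, 32
theta = np.zeros(d)
samples = []
for t in range(iters):
    batch = X[rng.integers(0, n, size=batch_size)]
    # Euler step of the over-damped Langevin diffusion with a
    # stochastic (subsampled) drift plus injected Gaussian noise
    theta = (theta
             + step * grad_log_posterior_estimate(theta, batch)
             + np.sqrt(2.0 * step) * rng.normal(size=d))
    if t > iters // 2:                          # discard burn-in
        samples.append(theta.copy())

posterior_mean = np.mean(samples, axis=0)
```

After burn-in, the empirical mean of the iterates approximates the posterior mean, which for this conjugate toy model is close to the sample mean of the data; the minibatch size trades per-step cost against extra gradient noise, the tension the paper quantifies via the subsampling rate.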
