Non-asymptotic convergence bounds for modified tamed unadjusted Langevin algorithm in non-convex setting

Abstract

We consider the problem of sampling from a high-dimensional target distribution $\pi_\beta$ on $\mathbb{R}^d$ with density proportional to $\theta \mapsto e^{-\beta U(\theta)}$ using explicit numerical schemes based on discretising the Langevin stochastic differential equation (SDE). In recent literature, taming has been proposed and studied as a method for ensuring stability of Langevin-based numerical schemes in the case of super-linearly growing drift coefficients for the Langevin SDE. In particular, the Tamed Unadjusted Langevin Algorithm (TULA) was proposed in [Bro+19] to sample from such target distributions when the gradient of the potential $U$ is super-linearly growing. However, theoretical guarantees in Wasserstein distances for Langevin-based algorithms have traditionally been derived under the assumption that the potential $U$ is strongly convex. In this paper, we propose a novel taming factor and derive, in a setting where the potential $U$ may be non-convex and the gradient of $U$ is super-linearly growing, non-asymptotic theoretical bounds in Wasserstein-1 and Wasserstein-2 distances between the law of our algorithm, which we name the modified Tamed Unadjusted Langevin Algorithm (mTULA), and the target distribution $\pi_\beta$. We obtain rates of convergence $\mathcal{O}(\lambda)$ and $\mathcal{O}(\lambda^{1/2})$ in Wasserstein-1 and Wasserstein-2 distances, respectively, for the discretisation error of mTULA in step size $\lambda$. High-dimensional numerical simulations supporting our theoretical findings are presented to showcase the applicability of our algorithm.
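To make the taming idea concrete, the following is a minimal Python sketch of one iteration of a tamed unadjusted Langevin scheme in the classical TULA style of [Bro+19], where the drift is divided by $1 + \lambda \lVert \nabla U(\theta) \rVert$ so each increment stays bounded even when $\nabla U$ grows super-linearly. Note that this taming factor is an illustrative stand-in: the modified taming factor proposed in this paper (mTULA) differs, and the potential used in the example is an assumed double-well test case, not one from the paper.

```python
import numpy as np

def tamed_ula_step(theta, grad_U, lam, beta, rng):
    """One iteration of a TULA-style tamed unadjusted Langevin scheme.

    theta  : current iterate in R^d
    grad_U : callable returning the (possibly super-linearly growing)
             gradient of the potential U
    lam    : step size lambda
    beta   : inverse temperature
    rng    : numpy random Generator
    """
    g = grad_U(theta)
    # Classical taming [Bro+19]: scale the drift by 1 / (1 + lam * ||g||),
    # bounding the per-step increment. (mTULA's modified taming factor
    # differs; this is only an illustrative stand-in.)
    tamed_drift = g / (1.0 + lam * np.linalg.norm(g))
    noise = rng.standard_normal(theta.shape)
    return theta - lam * tamed_drift + np.sqrt(2.0 * lam / beta) * noise

# Example: sample from pi_beta proportional to exp(-beta * U) for the
# non-convex double-well potential U(theta) = ||theta||^4/4 - ||theta||^2/2,
# whose gradient (||theta||^2 - 1) * theta grows super-linearly.
rng = np.random.default_rng(0)
theta = np.zeros(10)
grad_U = lambda t: (np.dot(t, t) - 1.0) * t
for _ in range(10_000):
    theta = tamed_ula_step(theta, grad_U, lam=1e-2, beta=1.0, rng=rng)
```

Without taming, an explicit Euler discretisation of the Langevin SDE with such a super-linearly growing drift is known to be unstable (iterates can diverge for any fixed step size), which is the motivation for schemes of this kind.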
