The efficiency of a Markov sampler based on the underdamped Langevin diffusion is studied for high-dimensional targets with convex and smooth potentials. We consider a classical second-order integrator which requires only one gradient computation per iteration. Contrary to previous works on similar samplers, a dimension-free contraction of Wasserstein distances and a convergence rate for the total variation distance are proved for the discrete-time chain itself. Non-asymptotic Wasserstein and total variation efficiency bounds and concentration inequalities are obtained for both the Metropolis-adjusted and unadjusted chains. In terms of the dimension $d$ and the desired accuracy $\varepsilon$, the Wasserstein efficiency bounds are of order $\sqrt{d}/\varepsilon$ in the general case, $\sqrt{d}/\sqrt{\varepsilon}$ if the Hessian of the potential is Lipschitz, and $d^{1/4}/\sqrt{\varepsilon}$ in the case of a separable target, in accordance with known results for other kinetic Langevin or HMC schemes.
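To make the setting concrete, here is a minimal sketch of an unadjusted kinetic Langevin sampler built on a standard second-order splitting (an OBABO-type scheme is assumed here for illustration; the paper's exact integrator may differ). By caching the gradient between iterations, each step requires only one new gradient evaluation, as the abstract describes. The function names (`obabo_step`, `sample`) and all parameter choices are illustrative, not taken from the paper.

```python
import numpy as np

def obabo_step(x, v, g, grad_f, h, gamma, rng):
    """One OBABO splitting step for the underdamped Langevin diffusion.

    Illustrative second-order scheme; `g` carries the cached gradient from
    the previous step, so only one new gradient evaluation is performed."""
    eta = np.exp(-gamma * h / 2.0)
    noise_scale = np.sqrt(1.0 - eta ** 2)
    v = eta * v + noise_scale * rng.standard_normal(x.shape)  # O: half Ornstein-Uhlenbeck step
    v = v - (h / 2.0) * g                                     # B: half-kick with cached gradient
    x = x + h * v                                             # A: full position drift
    g = grad_f(x)                                             # the single new gradient evaluation
    v = v - (h / 2.0) * g                                     # B: half-kick with new gradient
    v = eta * v + noise_scale * rng.standard_normal(x.shape)  # O: half Ornstein-Uhlenbeck step
    return x, v, g

def sample(grad_f, dim, n_steps, h, gamma, seed=0, burn_in=1000):
    """Run the unadjusted chain and return the positions after burn-in."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    v = rng.standard_normal(dim)
    g = grad_f(x)
    out = np.empty((n_steps - burn_in, dim))
    for k in range(n_steps):
        x, v, g = obabo_step(x, v, g, grad_f, h, gamma, rng)
        if k >= burn_in:
            out[k - burn_in] = x
    return out

# Standard Gaussian target: potential f(x) = |x|^2 / 2, so grad f(x) = x.
samples = sample(grad_f=lambda x: x, dim=2, n_steps=20000, h=0.1, gamma=1.0)
print(samples.mean(), samples.var())  # both should be close to (0, 1)
```

The Metropolis-adjusted variant studied in the paper would additionally accept or reject each proposed move so that the invariant law is exactly the target; the unadjusted chain above carries an $O(h^2)$ discretization bias instead.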