
Logsmooth Gradient Concentration and Tighter Runtimes for Metropolized Hamiltonian Monte Carlo

Abstract

We show that the gradient norm $\|\nabla f(x)\|$ for $x \sim \exp(-f(x))$, where $f$ is strongly convex and smooth, concentrates tightly around its mean. This removes a barrier in the prior state-of-the-art analysis for the well-studied Metropolized Hamiltonian Monte Carlo (HMC) algorithm for sampling from a strongly logconcave distribution. We correspondingly demonstrate that Metropolized HMC mixes in $\tilde{O}(\kappa d)$ iterations, improving upon the $\tilde{O}(\kappa^{1.5}\sqrt{d} + \kappa d)$ runtime of (Dwivedi et al. '18; Chen et al. '19) by a factor $(\kappa/d)^{1/2}$ when the condition number $\kappa$ is large. Our mixing time analysis introduces several techniques which, to our knowledge, have not appeared in the literature and may be of independent interest, including restrictions to a nonconvex set with good conductance behavior, and a new reduction technique for boosting a constant-accuracy total variation guarantee under weak warmness assumptions. This is the first high-accuracy mixing time result for logconcave distributions using only first-order function information which achieves linear dependence on $\kappa$; we also give evidence that this dependence is likely necessary for standard Metropolized first-order methods.
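For readers unfamiliar with the algorithm the abstract analyzes, the following is a minimal sketch of standard Metropolized HMC: leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject step, which makes $\exp(-f(x))$ the exact stationary distribution. The quadratic target, step size, and leapfrog count below are illustrative assumptions for a sanity check, not parameters taken from the paper.

```python
import numpy as np

def metropolized_hmc(f, grad_f, x0, step_size, n_leapfrog, n_iters, rng):
    """Sample from the density proportional to exp(-f(x)) using leapfrog
    proposals corrected by a Metropolis filter."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        p = rng.standard_normal(x.shape)            # resample momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian H(x, p) = f(x) + |p|^2 / 2.
        p_new -= 0.5 * step_size * grad_f(x_new)
        for k in range(n_leapfrog):
            x_new += step_size * p_new
            if k < n_leapfrog - 1:
                p_new -= step_size * grad_f(x_new)
        p_new -= 0.5 * step_size * grad_f(x_new)
        # Metropolis filter: accept with probability exp(H_old - H_new),
        # capped at 1; this removes the discretization bias of leapfrog.
        h_old = f(x) + 0.5 * p.dot(p)
        h_new = f(x_new) + 0.5 * p_new.dot(p_new)
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Illustrative strongly convex, smooth target: a quadratic f whose Hessian
# has eigenvalues in [1, kappa], so the condition number is kappa = 10 and
# the target is a Gaussian whose moments can be checked in closed form.
d, kappa = 4, 10.0
diag = np.linspace(1.0, kappa, d)
f = lambda x: 0.5 * x.dot(diag * x)
grad_f = lambda x: diag * x
rng = np.random.default_rng(0)
samples = metropolized_hmc(f, grad_f, np.zeros(d), 0.2, 5, 2000, rng)
```

Because the Metropolis step exactly preserves the target, the step size only affects mixing speed (the subject of the paper's $\tilde{O}(\kappa d)$ bound), not asymptotic accuracy.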
