
Sampling from Log-Concave Distributions with Infinity-Distance Guarantees

Abstract

For a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a convex body $K$, the problem of outputting samples from a distribution $\nu$ which is $\varepsilon$-close in infinity-distance $\sup_{\theta \in K} |\log \frac{\nu(\theta)}{\pi(\theta)}|$ to $\pi$ arises in differentially private optimization. While sampling within total-variation distance $\varepsilon$ of $\pi$ can be done by algorithms whose runtime depends polylogarithmically on $\frac{1}{\varepsilon}$, prior algorithms for sampling within $\varepsilon$ infinity-distance have runtime bounds that depend polynomially on $\frac{1}{\varepsilon}$. We bridge this gap by presenting an algorithm that outputs a point $\varepsilon$-close to $\pi$ in infinity-distance and requires at most $\mathrm{poly}(\log \frac{1}{\varepsilon}, d)$ calls to a membership oracle for $K$ and an evaluation oracle for $f$, when $f$ is Lipschitz. Our approach departs from prior works, which construct Markov chains on a $\frac{1}{\varepsilon^2}$-discretization of $K$ to achieve a sample with $\varepsilon$ infinity-distance error: we instead present a method to directly convert continuous samples from $K$ with total-variation bounds into samples with infinity-distance bounds. This approach also allows us to improve the dependence on the dimension $d$ in the running time for sampling from a log-concave distribution on a polytope $K$ with infinity-distance error $\varepsilon$, by plugging in total-variation-distance running time bounds for the Dikin walk Markov chain.
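For reference, the two notions of closeness compared in the abstract can be written side by side. The implication below is a standard fact (not a claim specific to this paper) showing that the infinity-distance guarantee is the stronger of the two, which is why achieving it with only polylogarithmic dependence on $\frac{1}{\varepsilon}$ is the harder problem:

```latex
d_{\mathrm{TV}}(\nu,\pi) \;=\; \sup_{A \subseteq K} \bigl|\nu(A) - \pi(A)\bigr|,
\qquad
d_{\infty}(\nu,\pi) \;=\; \sup_{\theta \in K} \Bigl|\log \tfrac{\nu(\theta)}{\pi(\theta)}\Bigr|.
% If d_infinity(nu, pi) <= eps, then nu(A) <= e^eps * pi(A) for every
% measurable A, hence
d_{\infty}(\nu,\pi) \le \varepsilon
\;\Longrightarrow\;
d_{\mathrm{TV}}(\nu,\pi) \le e^{\varepsilon} - 1.
```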
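The abstract's final claim plugs total-variation mixing bounds for the Dikin walk on polytopes into the conversion procedure. As a rough illustration of what one step of a Metropolis-adjusted Dikin walk looks like, here is a minimal sketch for a polytope $K = \{x : Ax \le b\}$ and target $\propto e^{-f}$. This is a generic textbook-style variant, not the paper's specific algorithm or step-size analysis; `A`, `b`, `f`, and the radius `r` are placeholder inputs:

```python
import numpy as np

def barrier_hessian(A, b, x):
    """Hessian of the log-barrier of {x : Ax <= b} at an interior point x."""
    s = b - A @ x                          # slacks, all > 0 inside K
    return (A / s[:, None] ** 2).T @ A     # sum_i a_i a_i^T / s_i^2

def dikin_step(A, b, f, x, r=0.5, rng=np.random.default_rng()):
    """One Metropolis-adjusted Dikin walk step (illustrative sketch)."""
    d = x.shape[0]
    H = barrier_hessian(A, b, x)
    L = np.linalg.cholesky(H)
    # Propose y ~ N(x, (r^2 / d) * H(x)^{-1}).
    y = x + (r / np.sqrt(d)) * np.linalg.solve(L.T, rng.standard_normal(d))
    if np.any(A @ y >= b):                 # proposal left K: reject
        return x
    Hy = barrier_hessian(A, b, y)
    Ly = np.linalg.cholesky(Hy)

    def log_q(H, L, u, v):
        # log proposal density of v given u, up to shared constants:
        # (1/2) log det H - (d / 2r^2) (v - u)^T H (v - u)
        diff = v - u
        return np.sum(np.log(np.diag(L))) - (d / (2 * r**2)) * diff @ H @ diff

    # Metropolis-Hastings correction toward the target exp(-f).
    log_acc = (-f(y) + log_q(Hy, Ly, y, x)) - (-f(x) + log_q(H, L, x, y))
    return y if np.log(rng.uniform()) < log_acc else x
```

Because the proposal covariance adapts to the local barrier Hessian, steps automatically shrink near the boundary of $K$; this self-concordance property is what underlies the dimension-dependent TV-mixing bounds that the paper leverages.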
