
Sampling from Log-Concave Distributions with Infinity-Distance Guarantees and Applications to Differentially Private Optimization

Main: 24 pages
2 figures
Bibliography: 4 pages
Appendix: 1 page
Abstract

For a $d$-dimensional log-concave distribution $\pi(\theta)\propto e^{-f(\theta)}$ on a polytope $K$, we consider the problem of outputting samples from a distribution $\nu$ which is $O(\varepsilon)$-close in infinity-distance $\sup_{\theta\in K}|\log\frac{\nu(\theta)}{\pi(\theta)}|$ to $\pi$. Samplers with infinity-distance guarantees are specifically desired for differentially private optimization, as traditional sampling algorithms that come with total-variation or KL-divergence bounds are insufficient to guarantee differential privacy. Our main result is an algorithm that outputs a point from a distribution $O(\varepsilon)$-close to $\pi$ in infinity-distance and requires $O((md+dL^2R^2)\times(LR+d\log(\frac{Rd+LRd}{\varepsilon r}))\times md^{\omega-1})$ arithmetic operations, where $f$ is $L$-Lipschitz, $K$ is defined by $m$ inequalities, is contained in a ball of radius $R$ and contains a ball of smaller radius $r$, and $\omega$ is the matrix-multiplication constant. In particular, this runtime is logarithmic in $\frac{1}{\varepsilon}$ and significantly improves on prior works. Technically, we depart from prior works that construct Markov chains on a $\frac{1}{\varepsilon^2}$-discretization of $K$ to achieve a sample with $O(\varepsilon)$ infinity-distance error, and instead present a method to convert continuous samples from $K$ with total-variation bounds into samples with infinity-distance bounds. To achieve improved dependence on $d$, we present a "soft-threshold" version of the Dikin walk, which may be of independent interest. Plugging our algorithm into the framework of the exponential mechanism yields similar improvements in the running time of $\varepsilon$-pure differentially private algorithms for optimization problems such as empirical risk minimization of Lipschitz-convex functions and low-rank approximation, while still achieving the tightest known utility bounds.
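To see why the infinity-distance guarantee is strictly stronger than a total-variation bound (the point the abstract makes about differential privacy), the following hedged sketch contrasts the two metrics on a toy discrete example; it is illustrative only and not the paper's algorithm, and the function names are our own:

```python
import numpy as np

def infinity_distance(nu, pi):
    """sup over the support of |log(nu/pi)| -- the metric used in the abstract."""
    return float(np.max(np.abs(np.log(nu) - np.log(pi))))

def total_variation(nu, pi):
    """Standard total-variation distance: half the L1 difference."""
    return float(0.5 * np.sum(np.abs(nu - pi)))

# Two distributions that are very close in TV but far in infinity-distance:
# the rare third outcome's probability ratio is 5x, i.e. |log ratio| = log 5.
pi = np.array([0.50, 0.490, 0.010])
nu = np.array([0.50, 0.498, 0.002])

tv = total_variation(nu, pi)        # 0.008 -- tiny
inf_d = infinity_distance(nu, pi)   # log(5) ~ 1.61 -- large
```

A mechanism whose output distribution is close to the exponential-mechanism density only in TV can still assign a rare outcome a probability ratio far from 1, which is exactly what pure differential privacy forbids; the infinity-distance bounds that ratio uniformly over the domain.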
