Sampling from Log-Concave Distributions with Infinity-Distance
Guarantees and Applications to Differentially Private Optimization
For a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ on a polytope $K$, we consider the problem of outputting samples from a distribution $\nu$ which is $\varepsilon$-close in infinity-distance $\sup_{\theta \in K} |\log \frac{\nu(\theta)}{\pi(\theta)}|$ to $\pi$. Such samplers with infinity-distance guarantees are specifically desired for differentially private optimization, as traditional sampling algorithms that come with total-variation distance or KL divergence bounds are insufficient to guarantee differential privacy. Our main result is an algorithm that outputs a point from a distribution $\varepsilon$-close to $\pi$ in infinity-distance and requires a number of arithmetic operations that is polynomial in $m$, $d$, $L$, $R$, and $1/r$, where $f$ is $L$-Lipschitz, $K$ is defined by $m$ inequalities, is contained in a ball of radius $R$ and contains a ball of smaller radius $r$, and $\omega$ is the matrix-multiplication constant appearing in the bound. In particular, this runtime is logarithmic in $1/\varepsilon$ and significantly improves on prior works. Technically, we depart from prior works that construct Markov chains on a fine, $\varepsilon$-dependent discretization of $K$ to achieve a sample with $\varepsilon$ infinity-distance error, and present a method to convert continuous samples from $K$ with total-variation bounds to samples with infinity bounds. To achieve improved dependence on $d$, we present a "soft-threshold" version of the Dikin walk, which may be of independent interest. Plugging our algorithm into the framework of the exponential mechanism yields similar improvements in the running time of $\varepsilon$-pure differentially private algorithms for optimization problems such as empirical risk minimization of Lipschitz convex functions and low-rank approximation, while still achieving the tightest known utility bounds.
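For reference, the infinity-distance invoked above is the standard max-divergence-style quantity (this definition is standard in the literature; the abstract does not spell it out):

```latex
d_\infty(\nu, \pi) \;=\; \sup_{\theta \in K} \left| \log \frac{\nu(\theta)}{\pi(\theta)} \right|.
```

A bound $d_\infty(\nu,\pi) \le \varepsilon$ says every event has probability under $\nu$ within a multiplicative $e^{\varepsilon}$ factor of its probability under $\pi$, which is exactly the multiplicative guarantee that $\varepsilon$-pure differential privacy requires; total-variation or KL bounds only control additive or average error and can be violated on low-probability events.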
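To see where such a sampler plugs in, below is a minimal, generic sketch of the exponential mechanism over a finite candidate grid. This is a toy stand-in for the paper's continuous polytope setting: exact sampling over a grid replaces the paper's log-concave sampler, and the function names, grid, loss, and parameter values are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_mechanism(scores, eps, sensitivity):
    """Sample an index with probability proportional to
    exp(-eps * score / (2 * sensitivity)).

    Standard McSherry-Talwar exponential mechanism over a finite
    candidate set; lower score = better utility here.
    """
    logits = -eps * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()          # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(rng.choice(len(scores), p=probs))

# Toy use: privately pick a grid point with low empirical risk.
grid = np.linspace(-1.0, 1.0, 201)     # hypothetical 1-D parameter grid
loss = (grid - 0.3) ** 2               # hypothetical convex empirical risk
idx = exponential_mechanism(loss, eps=1.0, sensitivity=0.1)
```

In the continuous setting of the paper, the grid is replaced by the polytope $K$ and the exact grid sampler by the proposed sampler; the infinity-distance guarantee is what lets the approximate sample inherit the mechanism's pure-DP guarantee.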