
Non-Euclidean Differentially Private Stochastic Convex Optimization: Optimal Rates in Linear Time

Abstract

Differentially private (DP) stochastic convex optimization (SCO) is a fundamental problem, where the goal is to approximately minimize the population risk with respect to a convex loss function, given a dataset of $n$ i.i.d. samples from a distribution, while satisfying differential privacy with respect to the dataset. Most existing works in the literature on private convex optimization focus on the Euclidean (i.e., $\ell_2$) setting, where the loss is assumed to be Lipschitz (and possibly smooth) w.r.t. the $\ell_2$ norm over a constraint set with bounded $\ell_2$ diameter. Algorithms based on noisy stochastic gradient descent (SGD) are known to attain the optimal excess risk in this setting. In this work, we conduct a systematic study of DP-SCO for $\ell_p$-setups under a standard smoothness assumption on the loss. For $1 < p \leq 2$, we give a new, linear-time DP-SCO algorithm with optimal excess risk; previously known constructions with optimal excess risk for $1 < p < 2$ run in time super-linear in $n$. For $p = 1$, we give an algorithm with nearly optimal excess risk. Our result for the $\ell_1$-setup also extends to general polyhedral norms and feasible sets. Moreover, we show that the excess risk bounds resulting from our algorithms for $1 \leq p \leq 2$ are attained with high probability. For $2 < p \leq \infty$, we show that existing linear-time constructions for the Euclidean setup attain a nearly optimal excess risk in the low-dimensional regime; as a consequence, such constructions attain a nearly optimal excess risk for $p = \infty$. Our work draws upon concepts from the geometry of normed spaces, such as the notions of regularity, uniform convexity, and uniform smoothness.
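To make the noisy-SGD primitive mentioned above concrete, the following is a minimal illustrative sketch of projected, noisy SGD for the Euclidean ($\ell_2$) setting. It is not the paper's algorithm for the $\ell_p$-setups; the function names, the toy squared loss, and the noise calibration via a generic noise_multiplier are assumptions made for illustration only.

```python
# Illustrative sketch only: noisy (differentially private) SGD in the l_2 setting.
# NOT the paper's algorithm; noise calibration and parameters are placeholder assumptions.
import numpy as np

def project_l2_ball(w, radius):
    """Project w onto the l_2 ball of the given radius (the constraint set)."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def noisy_sgd(data, grad_fn, dim, lipschitz, radius, steps, lr, noise_multiplier, rng):
    """Clip each per-sample gradient to the Lipschitz bound, add Gaussian noise
    scaled by noise_multiplier, and take a projected gradient step."""
    w = np.zeros(dim)
    n = len(data)
    for _ in range(steps):
        x = data[rng.integers(n)]               # sample a data point
        g = grad_fn(w, x)
        g_norm = np.linalg.norm(g)
        if g_norm > lipschitz:                   # clipping controls sensitivity
            g = g * (lipschitz / g_norm)
        noise = rng.normal(0.0, noise_multiplier * lipschitz, size=dim)
        w = project_l2_ball(w - lr * (g + noise), radius)
    return w

# Toy usage: squared loss for a linear model on synthetic data.
rng = np.random.default_rng(0)
data = [(rng.normal(size=5), rng.normal()) for _ in range(1000)]
grad_fn = lambda w, xy: (w @ xy[0] - xy[1]) * xy[0]
w_hat = noisy_sgd(data, grad_fn, dim=5, lipschitz=1.0, radius=1.0,
                  steps=2000, lr=0.05, noise_multiplier=0.8, rng=rng)
print("trained iterate:", w_hat)
```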
