We propose a new framework for differentially private optimization of convex functions which are Lipschitz in an arbitrary norm $\|\cdot\|$. Our algorithms are based on a regularized exponential mechanism which samples from the density $\propto \exp(-k(F + \mu r))$, where $F$ is the empirical loss and $r$ is a regularizer which is strongly convex with respect to $\|\cdot\|$, generalizing a recent work of [Gopi, Lee, Liu '22] to non-Euclidean settings. We show that this mechanism satisfies Gaussian differential privacy and solves both DP-ERM (empirical risk minimization) and DP-SCO (stochastic convex optimization) by using localization tools from convex geometry. Our framework is the first to apply to private convex optimization in general normed spaces, and it directly recovers the non-private SCO rates achieved by mirror descent as the privacy parameter $\varepsilon \to \infty$. As applications, for Lipschitz optimization in $\ell_p$ norms for all $p \in (1, 2)$, we obtain the first optimal privacy-utility tradeoffs; for $p = 1$, we improve tradeoffs obtained by the recent works [Asi, Feldman, Koren, Talwar '21; Bassily, Guzman, Nandi '21] by at least a logarithmic factor. Our $\ell_p$ norm and Schatten-$p$ norm optimization frameworks are complemented with polynomial-time samplers whose query complexity we explicitly bound.
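To make the mechanism concrete, here is a minimal one-dimensional sketch of sampling from a density proportional to $\exp(-k(F + \mu r))$ on a finite grid. The loss `F`, the regularizer `r`, the grid, and the parameter values are illustrative assumptions for this toy example; the paper's actual samplers are polynomial-time algorithms for high-dimensional normed spaces, not this brute-force discretization.

```python
import numpy as np

def regularized_exp_mechanism(F, r, k, mu, grid, rng):
    """Draw one sample with probability proportional to exp(-k*(F + mu*r)) on a grid.

    This brute-force discretized sampler is only for illustration; it does
    not reflect the query-efficient samplers analyzed in the paper.
    """
    logp = -k * (F(grid) + mu * r(grid))
    logp -= logp.max()            # subtract max log-density for numerical stability
    p = np.exp(logp)
    p /= p.sum()                  # normalize into a probability vector
    return rng.choice(grid, p=p)  # weighted draw from the grid points

rng = np.random.default_rng(0)
grid = np.linspace(-2.0, 2.0, 2001)
F = lambda t: np.abs(t - 0.5)     # assumed 1-Lipschitz empirical loss
r = lambda t: 0.5 * t**2          # assumed strongly convex regularizer

# With larger k (weaker privacy), samples concentrate near the regularized minimizer.
samples = np.array([regularized_exp_mechanism(F, r, 50.0, 0.1, grid, rng)
                    for _ in range(1000)])
```

Here the minimizer of $F + \mu r$ sits at $t = 0.5$, so at $k = 50$ the empirical mean of the samples lands close to $0.5$; shrinking $k$ flattens the density and spreads the samples out, which is the privacy-utility tradeoff in miniature.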