Private Convex Optimization in General Norms

18 July 2022
Sivakanth Gopi
Yin Tat Lee
Daogao Liu
Ruoqi Shen
Kevin Tian
Abstract

We propose a new framework for differentially private optimization of convex functions which are Lipschitz in an arbitrary norm $\|\cdot\|$. Our algorithms are based on a regularized exponential mechanism which samples from the density $\propto \exp(-k(F + \mu r))$, where $F$ is the empirical loss and $r$ is a regularizer which is strongly convex with respect to $\|\cdot\|$, generalizing a recent work of [Gopi, Lee, Liu '22] to non-Euclidean settings. We show that this mechanism satisfies Gaussian differential privacy and solves both DP-ERM (empirical risk minimization) and DP-SCO (stochastic convex optimization) by using localization tools from convex geometry. Our framework is the first to apply to private convex optimization in general normed spaces and directly recovers non-private SCO rates achieved by mirror descent as the privacy parameter $\epsilon \to \infty$. As applications, for Lipschitz optimization in $\ell_p$ norms for all $p \in (1, 2)$, we obtain the first optimal privacy-utility tradeoffs; for $p = 1$, we improve tradeoffs obtained by the recent works [Asi, Feldman, Koren, Talwar '21; Bassily, Guzmán, Nandi '21] by at least a logarithmic factor. Our $\ell_p$ norm and Schatten-$p$ norm optimization frameworks are complemented with polynomial-time samplers whose query complexity we explicitly bound.
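To make the mechanism concrete, here is a minimal sketch of sampling from $\propto \exp(-k(F + \mu r))$ in the Euclidean special case, taking $r(\theta) = \|\theta\|_2^2 / 2$ and a smooth logistic loss for $F$. The sampler (unadjusted Langevin dynamics), the loss, and every parameter value below are illustrative assumptions, not the paper's calibrated privacy parameters or its polynomial-time samplers.

```python
# Sketch: sample theta ~ exp(-k * (F(theta) + mu * r(theta))), where F is the
# empirical loss and r(theta) = ||theta||^2 / 2 is strongly convex in ell_2.
# Assumptions for illustration only: logistic loss, unadjusted Langevin
# dynamics as a generic log-concave sampler, and arbitrary k, mu, eta, steps.
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def grad_potential(theta, X, y, k, mu):
    # Gradient of U(theta) = k * (F(theta) + mu * ||theta||^2 / 2),
    # where F is the mean logistic loss over labels y_i in {-1, +1}.
    margins = y * (X @ theta)
    grad_F = -(X * (y * sigmoid(-margins))[:, None]).mean(axis=0)
    return k * (grad_F + mu * theta)

def sample_regularized_exp_mechanism(X, y, k, mu, steps=20_000, eta=1e-3, seed=0):
    # Unadjusted Langevin step: theta <- theta - eta * grad U + sqrt(2 eta) * N(0, I).
    # The iterates approximately follow the target density exp(-U).
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        noise = rng.standard_normal(theta.shape)
        theta = theta - eta * grad_potential(theta, X, y, k, mu) + np.sqrt(2.0 * eta) * noise
    return theta

# Toy usage: n = 200 synthetic points in d = 5 dimensions.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = np.sign(X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(200))
theta_private = sample_regularized_exp_mechanism(X, y, k=50.0, mu=0.1)
print(theta_private)
```

Langevin dynamics stands in here as a generic sampler for the strongly log-concave target; the paper's contribution includes specialized samplers for $\ell_p$ and Schatten-$p$ settings whose query complexity is explicitly bounded.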
