Differentially Private SGD with Non-Smooth Losses

22 January 2021
Puyu Wang
Yunwen Lei
Yiming Ying
Hai Zhang
Abstract

In this paper, we are concerned with differentially private stochastic gradient descent (SGD) algorithms in the setting of stochastic convex optimization (SCO). Most existing work requires the loss to be Lipschitz continuous and strongly smooth, and the model parameter to be uniformly bounded. However, these assumptions are restrictive, as many popular losses violate them, including the hinge loss for SVM, the absolute loss in robust regression, and even the least squares loss over an unbounded domain. We significantly relax these restrictive assumptions and establish privacy and generalization (utility) guarantees for private SGD algorithms using output and gradient perturbations with non-smooth convex losses. Specifically, the loss function is relaxed to have an $\alpha$-Hölder continuous gradient (referred to as $\alpha$-Hölder smoothness), which instantiates Lipschitz continuity ($\alpha=0$) and strong smoothness ($\alpha=1$). We prove that noisy SGD with $\alpha$-Hölder smooth losses using gradient perturbation can guarantee $(\epsilon,\delta)$-differential privacy (DP) and attain the optimal excess population risk $\mathcal{O}\big(\frac{\sqrt{d\log(1/\delta)}}{n\epsilon}+\frac{1}{\sqrt{n}}\big)$, up to logarithmic terms, with gradient complexity $\mathcal{O}\big(n^{\frac{2-\alpha}{1+\alpha}}+n\big)$. This shows an important trade-off between the $\alpha$-Hölder smoothness of the loss and the computational complexity of private SGD with statistically optimal performance. In particular, our results indicate that $\alpha$-Hölder smoothness with $\alpha\ge 1/2$ is sufficient to guarantee $(\epsilon,\delta)$-DP of noisy SGD algorithms while achieving the optimal excess risk with linear gradient complexity $\mathcal{O}(n)$.
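The gradient-perturbation mechanism the abstract analyzes can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, learning rate, and noise scale `sigma` are hypothetical, and the calibration of `sigma` to the gradient sensitivity and the target $(\epsilon,\delta)$ budget (via the Gaussian mechanism) is omitted. The example loss is the absolute loss, one of the non-smooth ($\alpha=0$) cases the paper covers.

```python
import numpy as np

def noisy_sgd(grad_fn, theta0, data, T, eta, sigma, seed=0):
    """Noisy SGD with Gaussian gradient perturbation.

    At every step, Gaussian noise of scale `sigma` is added to a
    stochastic (sub)gradient before the update. In a real DP deployment,
    `sigma` must be calibrated to the gradient sensitivity and the
    (epsilon, delta) privacy budget; that calibration is omitted here.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(T):
        z = data[rng.integers(len(data))]          # sample one example
        g = grad_fn(theta, z)                      # stochastic (sub)gradient
        noise = rng.normal(0.0, sigma, size=theta.shape)
        theta -= eta * (g + noise)                 # perturbed update
    return theta

# Example: absolute loss |x.theta - y| (non-smooth, alpha = 0);
# its subgradient is sign(x.theta - y) * x.
def abs_loss_grad(theta, z):
    x, y = z
    return np.sign(x @ theta - y) * x

data = [(np.array([1.0, 0.5]), 1.0), (np.array([0.2, 2.0]), 0.5)]
theta_priv = noisy_sgd(abs_loss_grad, np.zeros(2), data,
                       T=200, eta=0.05, sigma=0.1)
```

The one-example-per-step loop makes the gradient complexity directly visible: running $T$ iterations costs $T$ gradient evaluations, which is the quantity the paper's $\mathcal{O}\big(n^{\frac{2-\alpha}{1+\alpha}}+n\big)$ bound controls.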
