Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions

4 June 2024
Hilal Asi
Daogao Liu
Kevin Tian
arXiv:2406.02789
Abstract

We study the problem of differentially private stochastic convex optimization (DP-SCO) with heavy-tailed gradients, where we assume a $k^{\text{th}}$-moment bound on the Lipschitz constants of sample functions rather than a uniform bound. We propose a new reduction-based approach that enables us to obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error $G_2 \cdot \frac{1}{\sqrt{n}} + G_k \cdot \left(\frac{\sqrt{d}}{n\epsilon}\right)^{1 - \frac{1}{k}}$ under $(\epsilon, \delta)$-approximate differential privacy, up to a mild $\textup{polylog}(\frac{1}{\delta})$ factor, where $G_2^2$ and $G_k^k$ are the $2^{\text{nd}}$ and $k^{\text{th}}$ moment bounds on sample Lipschitz constants, nearly-matching a lower bound of [Lowy and Razaviyayn 2023]. We further give a suite of private algorithms in the heavy-tailed setting which improve upon our basic result under additional assumptions, including an optimal algorithm under a known-Lipschitz constant assumption, a near-linear time algorithm for smooth functions, and an optimal linear time algorithm for smooth generalized linear models.
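
To make the stated rate concrete, below is a minimal sketch (not from the paper) that evaluates the two terms of the bound, $G_2/\sqrt{n}$ and $G_k (\sqrt{d}/(n\epsilon))^{1-1/k}$, for illustrative parameter choices; the function name and the specific values of $n$, $d$, $\epsilon$, $G_2$, $G_k$ are assumptions made here for illustration, and logarithmic factors are omitted.

```python
import math

def dp_sco_heavy_tail_rate(n, d, eps, G2, Gk, k):
    """Illustrative evaluation (log factors dropped) of the abstract's rate:
    G2 / sqrt(n)  +  Gk * (sqrt(d) / (n * eps)) ** (1 - 1/k).

    The first term is the usual statistical rate; the second is the privacy
    cost under a k-th moment bound on sample Lipschitz constants. Symbols
    mirror the abstract; this helper itself is not from the paper.
    """
    statistical = G2 / math.sqrt(n)
    privacy = Gk * (math.sqrt(d) / (n * eps)) ** (1.0 - 1.0 / k)
    return statistical + privacy

# As k grows (stronger tail assumption), the exponent 1 - 1/k approaches 1,
# so the privacy term shrinks toward the uniform-Lipschitz rate sqrt(d)/(n*eps).
for k in (2, 4, 8):
    print(k, dp_sco_heavy_tail_rate(n=10_000, d=100, eps=1.0, G2=1.0, Gk=1.0, k=k))
```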
