arXiv:2302.14154
Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime

27 February 2023
Hilal Asi
Vitaly Feldman
Tomer Koren
Kunal Talwar
Abstract

We consider online learning problems in the realizable setting, where there is a zero-loss solution, and propose new Differentially Private (DP) algorithms that obtain near-optimal regret bounds. For the problem of online prediction from experts, we design new algorithms that obtain near-optimal regret $O(\varepsilon^{-1} \log^{1.5} d)$, where $d$ is the number of experts. This significantly improves over the best existing regret bounds for the DP non-realizable setting, which are $O(\varepsilon^{-1} \min\{d, T^{1/3} \log d\})$. We also develop an adaptive algorithm for the small-loss setting with regret $O(L^\star \log d + \varepsilon^{-1} \log^{1.5} d)$, where $L^\star$ is the total loss of the best expert. Additionally, we consider DP online convex optimization in the realizable setting and propose an algorithm with near-optimal regret $O(\varepsilon^{-1} d^{1.5})$, as well as an algorithm for the smooth case with regret $O(\varepsilon^{-2/3} (dT)^{1/3})$, both significantly improving over existing bounds in the non-realizable regime.
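The abstract does not spell out the DP algorithms themselves. As non-private background for why the realizable experts setting admits polylogarithmic-in-$d$ bounds at all, here is a minimal sketch of the classic Halving algorithm (not the paper's method): when some expert is always correct, predicting by majority vote over the still-consistent experts and discarding every expert that errs guarantees at most $\log_2 d$ mistakes, since each mistake at least halves the set of consistent experts. The stream construction below is a synthetic illustration, not data from the paper.

```python
import math
import random

def halving(d, stream):
    """Halving algorithm for realizable binary prediction from experts.

    stream yields (expert_preds, true_label) pairs; some fixed expert
    is always correct, so total mistakes are bounded by log2(d).
    """
    active = set(range(d))  # experts consistent with all labels so far
    mistakes = 0
    for expert_preds, y in stream:
        # Predict by majority vote among the still-active experts.
        votes = sum(expert_preds[i] for i in active)
        pred = 1 if 2 * votes >= len(active) else 0
        if pred != y:
            mistakes += 1
        # Drop every expert that erred; the perfect expert survives,
        # and after a mistake at most half the active experts remain.
        active = {i for i in active if expert_preds[i] == y}
    return mistakes

# Synthetic realizable stream: expert 0 always matches the label.
random.seed(0)
d, T = 64, 200
def make_stream():
    for _ in range(T):
        y = random.randint(0, 1)
        preds = [y] + [random.randint(0, 1) for _ in range(d - 1)]
        yield preds, y

m = halving(d, make_stream())
assert m <= math.log2(d)  # at most log2(64) = 6 mistakes over 200 rounds
```

The paper's contribution is achieving comparable polylogarithmic guarantees *under differential privacy*, where naively discarding experts based on individual examples would leak information; the sketch only shows the structure that makes the realizable regime special.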
