arXiv:2302.14843
High Probability Convergence of Stochastic Gradient Methods

28 February 2023
Zijian Liu
Ta Duy Nguyen
Thien Hai Nguyen
Alina Ene
Huy Le Nguyen
Abstract

In this work, we describe a generic approach to show convergence with high probability for both stochastic convex and non-convex optimization with sub-Gaussian noise. In previous works for convex optimization, either the convergence is only in expectation or the bound depends on the diameter of the domain. Instead, we show high probability convergence with bounds depending on the initial distance to the optimal solution. The algorithms use step sizes analogous to the standard settings and are universal to Lipschitz functions, smooth functions, and their linear combinations. The same approach applies to the non-convex case. For SGD, we demonstrate an $O((1+\sigma^{2}\log(1/\delta))/T+\sigma/\sqrt{T})$ convergence rate when the number of iterations $T$ is known and an $O((1+\sigma^{2}\log(T/\delta))/\sqrt{T})$ convergence rate when $T$ is unknown, where $1-\delta$ is the desired success probability. These bounds improve over existing bounds in the literature. Additionally, we demonstrate that our techniques can be used to obtain a high probability bound for AdaGrad-Norm (Ward et al., 2019) that removes the bounded gradients assumption from previous works. Furthermore, our technique for AdaGrad-Norm extends to the standard per-coordinate AdaGrad algorithm (Duchi et al., 2011), providing the first noise-adapted high probability convergence for AdaGrad.
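
The abstract refers to SGD with standard step sizes and to AdaGrad-Norm (Ward et al., 2019), which replaces the fixed step size with an adaptive scalar built from accumulated squared gradient norms. The following is a minimal Python sketch of the two update rules on a toy quadratic with Gaussian (hence sub-Gaussian) gradient noise; the objective, step-size values, and parameter names (eta, b0, sigma) are illustrative assumptions, not the schedules analyzed in the paper.

```python
import numpy as np

def sgd(grad, x0, T, eta, sigma, rng):
    """Plain SGD with a fixed step size (illustrative, not the paper's schedule)."""
    x = x0.copy()
    for _ in range(T):
        g = grad(x) + sigma * rng.standard_normal(x.shape)  # Gaussian (sub-Gaussian) noise
        x = x - eta * g
    return x

def adagrad_norm(grad, x0, T, eta, b0, sigma, rng):
    """AdaGrad-Norm: a single adaptive scalar step size eta / sqrt(b0^2 + sum ||g_t||^2)."""
    x = x0.copy()
    b_sq = b0 ** 2
    for _ in range(T):
        g = grad(x) + sigma * rng.standard_normal(x.shape)
        b_sq += np.dot(g, g)                 # accumulate squared gradient norms
        x = x - (eta / np.sqrt(b_sq)) * g
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.diag([1.0, 10.0])                 # toy ill-conditioned quadratic f(x) = 0.5 x^T A x
    grad = lambda x: A @ x                   # minimizer at the origin
    x0 = np.array([5.0, 5.0])
    print("SGD:         ", sgd(grad, x0, T=2000, eta=0.05, sigma=0.1, rng=rng))
    print("AdaGrad-Norm:", adagrad_norm(grad, x0, T=2000, eta=1.0, b0=0.1, sigma=0.1, rng=rng))
```

Note that AdaGrad-Norm needs no knowledge of the noise level sigma or the horizon T to set its step size, which is the setting in which the paper's noise-adapted high probability bounds apply.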
