ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
High Probability Convergence of Stochastic Gradient Methods
arXiv:2302.14843 · 28 February 2023
Zijian Liu, Ta Duy Nguyen, Thien Hai Nguyen, Alina Ene, Huy Le Nguyen

Papers citing "High Probability Convergence of Stochastic Gradient Methods"

12 of 12 citing papers shown
  1. Nonlinear Stochastic Gradient Descent and Heavy-tailed Noise: A Unified Framework and High-probability Guarantees (17 Oct 2024)
     Aleksandar Armacki, Shuhua Yu, Pranay Sharma, Gauri Joshi, Dragana Bajović, D. Jakovetić, S. Kar
  2. Differential Private Stochastic Optimization with Heavy-tailed Data: Towards Optimal Rates (19 Aug 2024)
     Puning Zhao, Xiaogang Xu, Zhe Liu, Chong Wang, Rongfei Fan, Qingming Li
  3. Faster Stochastic Optimization with Arbitrary Delays via Asynchronous Mini-Batching (14 Aug 2024)
     Amit Attia, Ofir Gaash, Tomer Koren
  4. Almost sure convergence rates of stochastic gradient methods under gradient domination (22 May 2024)
     Simon Weissmann, Sara Klein, Waïss Azizian, Leif Döring
  5. On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions (06 Feb 2024)
     Yusu Hong, Junhong Lin
  6. How Free is Parameter-Free Stochastic Optimization? (05 Feb 2024)
     Amit Attia, Tomer Koren
  7. General Tail Bounds for Non-Smooth Stochastic Mirror Descent (12 Dec 2023)
     Khaled Eldowa, Andrea Paudice
  8. A Large Deviations Perspective on Policy Gradient Algorithms (13 Nov 2023)
     Wouter Jongeneel, Daniel Kuhn, Mengmeng Li
  9. SGD with AdaGrad Stepsizes: Full Adaptivity with High Probability to Unknown Parameters, Unbounded Gradients and Affine Variance (17 Feb 2023)
     Amit Attia, Tomer Koren
  10. On the Convergence of AdaGrad(Norm) on $\R^{d}$: Beyond Convexity, Non-Asymptotic Rate and Acceleration (29 Sep 2022)
      Zijian Liu, Ta Duy Nguyen, Alina Ene, Huy Le Nguyen
  11. A High Probability Analysis of Adaptive SGD with Momentum (28 Jul 2020)
      Xiaoyun Li, Francesco Orabona
  12. A Simple Convergence Proof of Adam and Adagrad (05 Mar 2020)
      Alexandre Défossez, Léon Bottou, Francis R. Bach, Nicolas Usunier