SGD with AdaGrad Stepsizes: Full Adaptivity with High Probability to Unknown Parameters, Unbounded Gradients and Affine Variance
Amit Attia, Tomer Koren
arXiv:2302.08783, 17 February 2023
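For context, the paper analyzes stochastic gradient descent driven by the AdaGrad-Norm stepsize eta_t = eta / sqrt(b_0^2 + sum_{s<=t} ||g_s||^2), which adapts without knowing the smoothness or noise parameters. Below is a minimal sketch of that scheme, not the paper's own code; the function names, the `grad_fn` interface, and the toy noisy quadratic are illustrative assumptions.

```python
import numpy as np

def sgd_adagrad_norm(grad_fn, x0, eta=1.0, b0=1e-8, n_steps=1000):
    """SGD with an AdaGrad-Norm stepsize: at step t the learning rate is
    eta / sqrt(b0^2 + sum of squared stochastic gradient norms seen so far).
    (Sketch only; variants differ on whether the current gradient is
    included in the accumulator before or after the update.)"""
    x = np.asarray(x0, dtype=float).copy()
    accum = b0 ** 2                        # running sum of squared gradient norms
    for _ in range(n_steps):
        g = grad_fn(x)                     # stochastic gradient at x
        accum += float(np.dot(g, g))       # accumulate ||g_t||^2
        x -= (eta / np.sqrt(accum)) * g    # adaptive-stepsize SGD update
    return x

# Toy usage: noisy gradients of f(x) = 0.5 * ||x||^2 (illustrative only)
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
x_final = sgd_adagrad_norm(noisy_grad, x0=np.ones(10), eta=1.0)
print(np.linalg.norm(x_final))
```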
Papers citing "SGD with AdaGrad Stepsizes: Full Adaptivity with High Probability to Unknown Parameters, Unbounded Gradients and Affine Variance" (10 of 10 papers shown):
- On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions. Yusu Hong, Junhong Lin. 06 Feb 2024.
- Making SGD Parameter-Free. Y. Carmon, Oliver Hinder. 04 May 2022.
- High Probability Bounds for a Class of Nonconvex Algorithms with AdaGrad Stepsize. Ali Kavis, Kfir Y. Levy, Volkan Cevher. 06 Apr 2022.
- The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance. Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel A. Ward. 11 Feb 2022.
- A new regret analysis for Adam-type algorithms. Ahmet Alacaoglu, Yura Malitsky, P. Mertikopoulos, Volkan Cevher. 21 Mar 2020.
- Lipschitz and Comparator-Norm Adaptivity in Online Learning. Zakaria Mhammedi, Wouter M. Koolen. 27 Feb 2020.
- On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization. Dongruo Zhou, Yiqi Tang, Yuan Cao, Ziyan Yang, Quanquan Gu. 16 Aug 2018.
- On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes. Xiaoyun Li, Francesco Orabona. 21 May 2018.
- Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming. Saeed Ghadimi, Guanghui Lan. 22 Sep 2013.
- No More Pesky Learning Rates. Tom Schaul, Sixin Zhang, Yann LeCun. 06 Jun 2012.