On the Convergence of mSGD and AdaGrad for Stochastic Optimization
Ruinan Jin, Yu Xing, Xingkang He
arXiv:2201.11204, 26 January 2022
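For context on the two methods named in the title, below is a minimal sketch of the update rules usually meant by mSGD (momentum SGD in its heavy-ball form) and AdaGrad. This is an illustrative textbook formulation under standard assumptions, not the paper's exact setting; the function names and hyperparameter values (lr, beta, eps) are illustrative, not taken from the paper.

    import numpy as np

    def msgd_step(x, v, grad, lr=0.01, beta=0.9):
        # Heavy-ball momentum SGD: v <- beta * v + g, then x <- x - lr * v.
        v = beta * v + grad
        return x - lr * v, v

    def adagrad_step(x, sq_sum, grad, lr=0.1, eps=1e-8):
        # AdaGrad: accumulate squared gradients, then scale the step
        # element-wise by 1 / (sqrt(accumulated sum) + eps).
        sq_sum = sq_sum + grad ** 2
        return x - lr * grad / (np.sqrt(sq_sum) + eps), sq_sum

    # Example: minimize f(x) = ||x||^2 / 2 from noisy gradient samples.
    rng = np.random.default_rng(0)
    x, v = np.ones(3), np.zeros(3)
    for _ in range(1000):
        grad = x + 0.1 * rng.standard_normal(3)  # stochastic gradient
        x, v = msgd_step(x, v, grad)

Repeated application of either step with stochastic gradients grad = ∇f(x; ξ_t) is the iteration whose convergence the paper studies.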
Papers citing "On the Convergence of mSGD and AdaGrad for Stochastic Optimization" (8 of 8 papers shown)

1. Shuffling Momentum Gradient Algorithm for Convex Optimization
   Trang H. Tran, Quoc Tran-Dinh, Lam M. Nguyen (05 Mar 2024)
2. Accelerated Convergence of Stochastic Heavy Ball Method under Anisotropic Gradient Noise
   Rui Pan, Yuxing Liu, Xiaoyu Wang, Tong Zhang (22 Dec 2023)
3. Demystifying the Myths and Legends of Nonconvex Convergence of SGD
   Aritra Dutta, El Houcine Bergou, Soumia Boucherouite, Nicklas Werge, M. Kandemir, Xin Li (19 Oct 2023)
4. Acceleration of stochastic gradient descent with momentum by averaging: finite-sample rates and asymptotic normality
   Kejie Tang, Weidong Liu, Yichen Zhang, Xi Chen (28 May 2023)
5. Adam Can Converge Without Any Modification On Update Rules
   Yushun Zhang, Congliang Chen, Naichen Shi, Ruoyu Sun, Zhimin Luo (20 Aug 2022)
6. The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance
   Matthew Faw, Isidoros Tziotis, C. Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel A. Ward (11 Feb 2022)
7. A High Probability Analysis of Adaptive SGD with Momentum
   Xiaoyun Li, Francesco Orabona (28 Jul 2020)
8. A Simple Convergence Proof of Adam and Adagrad
   Alexandre Défossez, Léon Bottou, Francis R. Bach, Nicolas Usunier (05 Mar 2020)