On the Last Iterate Convergence of Momentum Methods
13 February 2021 · arXiv:2102.07002 · versions: v1, v2, v3 (latest)
Xiaoyun Li, Mingrui Liu, Francesco Orabona
Links: arXiv abstract · PDF · HTML
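For context, below is a minimal sketch of stochastic gradient descent with (heavy-ball) momentum, the family of methods whose last-iterate convergence the paper studies. The function names, hyperparameters, and toy objective are illustrative assumptions, not the paper's notation or algorithm.

```python
import numpy as np

def sgd_momentum(grad, x0, lr=0.01, beta=0.9, n_steps=1000):
    """Stochastic heavy-ball momentum (illustrative sketch):
    m_t = beta * m_{t-1} + g_t,  x_{t+1} = x_t - lr * m_t,
    where g_t is a stochastic gradient at x_t. Returns the last iterate."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    for _ in range(n_steps):
        g = grad(x)        # stochastic gradient oracle
        m = beta * m + g   # momentum buffer
        x = x - lr * m     # gradient step along the momentum direction
    return x               # last iterate (no averaging)

# Toy usage: noisy gradients of f(x) = 0.5 * ||x||^2
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
print(sgd_momentum(noisy_grad, np.ones(5)))
```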

Papers citing "On the Last Iterate Convergence of Momentum Methods"

12 / 12 papers shown
On the Performance Analysis of Momentum Method: A Frequency Domain Perspective
Xianliang Li, Jun Luo, Zhiwei Zheng, Hanxiao Wang, Li Luo, Lingkun Wen, Linlong Wu, Sheng Xu
171 · 0 · 0 · 29 Nov 2024

A new regret analysis for Adam-type algorithms
Ahmet Alacaoglu, Yura Malitsky, P. Mertikopoulos, Volkan Cevher
ODL · 77 · 43 · 0 · 21 Mar 2020

Momentum-Based Variance Reduction in Non-Convex SGD
Ashok Cutkosky, Francesco Orabona
ODL · 96 · 410 · 0 · 24 May 2019

Adaptive Gradient Methods with Dynamic Bound of Learning Rate
Liangchen Luo, Yuanhao Xiong, Yan Liu, Xu Sun
ODL · 91 · 602 · 0 · 26 Feb 2019

Tight Analyses for Non-Smooth Stochastic Gradient Descent
Nicholas J. A. Harvey, Christopher Liaw, Y. Plan, Sikander Randhawa
65 · 138 · 0 · 13 Dec 2018

Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron
Sharan Vaswani, Francis R. Bach, Mark Schmidt
95 · 301 · 0 · 16 Oct 2018

On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes
Xiaoyun Li, Francesco Orabona
76 · 298 · 0 · 21 May 2018

On the insufficiency of existing momentum schemes for Stochastic Optimization
Rahul Kidambi, Praneeth Netrapalli, Prateek Jain, Sham Kakade
ODL · 90 · 120 · 0 · 15 Mar 2018

The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning
Siyuan Ma, Raef Bassily, M. Belkin
92 · 291 · 0 · 18 Dec 2017

A Second-order Bound with Excess Losses
Pierre Gaillard, Gilles Stoltz, T. Erven
81 · 154 · 0 · 10 Feb 2014

Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization
Alekh Agarwal, Peter L. Bartlett, Pradeep Ravikumar, Martin J. Wainwright
212 · 251 · 0 · 03 Sep 2010

Less Regret via Online Conditioning
Matthew J. Streeter, H. B. McMahan
ODL · 101 · 66 · 0 · 25 Feb 2010