ResearchTrend.AI
arXiv:2112.14872
Local Quadratic Convergence of Stochastic Gradient Descent with Adaptive Step Size
Adityanarayanan Radhakrishnan, M. Belkin, Caroline Uhler
30 December 2021 · ODL
Papers citing "Local Quadratic Convergence of Stochastic Gradient Descent with Adaptive Step Size"

13 papers shown
ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning (01 Jun 2020)
Z. Yao, A. Gholami, Sheng Shen, Mustafa Mustafa, Kurt Keutzer, Michael W. Mahoney · ODL
Scalable Second Order Optimization for Deep Learning (20 Feb 2020)
Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, Y. Singer · ODL
Linear Convergence of Adaptive Stochastic Gradient Descent (28 Aug 2019)
Yuege Xie, Xiaoxia Wu, Rachel A. Ward
Escaping Saddle Points with Adaptive Gradient Methods (26 Jan 2019)
Matthew Staib, Sashank J. Reddi, Satyen Kale, Sanjiv Kumar, S. Sra · ODL
On exponential convergence of SGD in non-convex over-parametrized learning (06 Nov 2018)
Xinhai Liu, M. Belkin, Yu-Shen Liu
Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron (16 Oct 2018)
Sharan Vaswani, Francis R. Bach, Mark Schmidt
signSGD: Compressed Optimisation for Non-Convex Problems (13 Feb 2018)
Jeremy Bernstein, Yu Wang, Kamyar Azizzadenesheli, Anima Anandkumar · FedML, ODL
Diving into the shallows: a computational perspective on large-scale shallow learning (30 Mar 2017)
Siyuan Ma, M. Belkin
IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate (02 Feb 2017)
Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition (16 Aug 2016)
Hamed Karimi, J. Nutini, Mark Schmidt
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization (05 Jul 2016)
Tianlin Li, Shiqian Ma, D. Goldfarb, Wen Liu
Adam: A Method for Stochastic Optimization (22 Dec 2014)
Diederik P. Kingma, Jimmy Ba · ODL
Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods (09 Nov 2013)
Jascha Narain Sohl-Dickstein, Ben Poole, Surya Ganguli · ODL