ResearchTrend.AI


Convergence and Dynamical Behavior of the ADAM Algorithm for Non-Convex Stochastic Optimization (arXiv:1810.02263)

Anas Barakat, Pascal Bianchi
4 October 2018
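For context on the citing papers listed below, the ADAM update analyzed in the paper can be sketched as follows. This is a minimal NumPy sketch of the update rule from Kingma & Ba (2014); the function name, signature, and hyperparameter defaults are illustrative, not taken from any paper's code.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update step (illustrative sketch).

    theta: current parameters, grad: stochastic gradient at theta,
    m, v: first/second-moment running averages, t: step count (from 1).
    """
    m = beta1 * m + (1 - beta1) * grad        # exponential moving average of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # exponential moving average of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias correction for the first moment
    v_hat = v / (1 - beta2 ** t)              # bias correction for the second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

The cited convergence analyses study exactly this kind of iteration in the non-convex setting, often via the continuous-time ODE limits that several of the papers below (Su-Boyd-Candes, Belotto da Silva-Gazeau, Shi et al.) develop.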

Papers citing "Convergence and Dynamical Behavior of the ADAM Algorithm for Non-Convex Stochastic Optimization"

15 / 15 papers shown
1. Diagnosis of Patients with Viral, Bacterial, and Non-Pneumonia Based on Chest X-Ray Images Using Convolutional Neural Networks
   Carlos Arizmendi, Jorge Pinto, Alejandro Arboleda, Hernando Gonzalez | 03 Mar 2025 | 1 citation
2. On the Convergence of Adam and Beyond
   Sashank J. Reddi, Satyen Kale, Sanjiv Kumar | 19 Apr 2019 | 2,482 citations
3. A general system of differential equations to model first order adaptive algorithms
   André Belotto da Silva, Maxime Gazeau | 31 Oct 2018 | 33 citations
4. Understanding the Acceleration Phenomenon via High-Resolution Differential Equations
   Bin Shi, S. Du, Michael I. Jordan, Weijie J. Su | 21 Oct 2018 | 256 citations
5. On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization
   Xiangyi Chen, Sijia Liu, Ruoyu Sun, Mingyi Hong | 08 Aug 2018 | 322 citations
6. AdaGrad stepsizes: Sharp convergence over nonconvex landscapes
   Rachel A. Ward, Xiaoxia Wu, Léon Bottou | ODL | 05 Jun 2018 | 365 citations
7. Stochastic subgradient method converges on tame functions
   Damek Davis, Dmitriy Drusvyatskiy, Sham Kakade, Jason D. Lee | 20 Apr 2018 | 251 citations
8. signSGD: Compressed Optimisation for Non-Convex Problems
   Jeremy Bernstein, Yu Wang, Kamyar Azizzadenesheli, Anima Anandkumar | FedML, ODL | 13 Feb 2018 | 1,026 citations
9. Variants of RMSProp and Adagrad with Logarithmic Regret Bounds
   Mahesh Chandra Mukkamala, Matthias Hein | ODL | 17 Jun 2017 | 258 citations
10. Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients
    Lukas Balles, Philipp Hennig | 22 May 2017 | 166 citations
11. Stochastic Heavy Ball
    S. Gadat, Fabien Panloup, Sofiane Saadane | 14 Sep 2016 | 102 citations
12. A Variational Perspective on Accelerated Methods in Optimization
    Andre Wibisono, Ashia Wilson, Michael I. Jordan | 14 Mar 2016 | 572 citations
13. A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
    Weijie Su, Stephen P. Boyd, Emmanuel J. Candes | 04 Mar 2015 | 1,161 citations
14. Adam: A Method for Stochastic Optimization
    Diederik P. Kingma, Jimmy Ba | ODL | 22 Dec 2014 | 149,474 citations
15. No More Pesky Learning Rates
    Tom Schaul, Sixin Zhang, Yann LeCun | 06 Jun 2012 | 477 citations