Incorporating the Barzilai-Borwein Adaptive Step Size into Subgradient Methods for Deep Network Training

27 May 2022 · A. Robles-Kelly, A. Nazari · ODL
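The paper integrates the Barzilai-Borwein (BB) adaptive step size into subgradient methods. For reference, the classical BB1 rule sets the step to alpha_k = (s·s)/(s·y), where s = x_k − x_{k−1} is the iterate difference and y = g_k − g_{k−1} is the (sub)gradient difference. The sketch below places this standard rule inside a plain subgradient loop; it is a minimal illustration of the BB1 formula under its textbook definition, not a reproduction of the paper's method, and every name in it (bb_subgradient_descent, f_grad, alpha0) is hypothetical.

```python
import numpy as np

def bb_subgradient_descent(f_grad, x0, n_iter=100, alpha0=1e-3, eps=1e-12):
    """Subgradient descent with a Barzilai-Borwein (BB1) step size.

    f_grad(x) must return a (sub)gradient of the objective at x.
    A minimal sketch; names and defaults are illustrative only.
    """
    x_prev = x0.astype(float)
    g_prev = f_grad(x_prev)
    x = x_prev - alpha0 * g_prev      # first step uses a fixed step size
    for _ in range(n_iter):
        g = f_grad(x)
        s = x - x_prev                # iterate difference s_k
        y = g - g_prev                # (sub)gradient difference y_k
        denom = s @ y
        # BB1 step: <s, s> / <s, y>; fall back to alpha0 for near-zero denominators
        alpha = (s @ s) / denom if abs(denom) > eps else alpha0
        alpha = abs(alpha)            # safeguard: keep the step positive
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Example: minimize the nonsmooth f(x) = ||x||_1 + 0.5 * ||x||^2,
# whose subgradient at x is sign(x) + x and whose minimizer is 0.
x_star = bb_subgradient_descent(lambda x: np.sign(x) + x, np.array([3.0, -2.0]))
```

On nonsmooth objectives the raw BB step can be very large or negative, which is why safeguards such as the absolute value and the alpha0 fallback above (or explicit step clipping) are typically combined with it.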

Papers citing "Incorporating the Barzilai-Borwein Adaptive Step Size into Subgradient Methods for Deep Network Training"

5 of 5 citing papers shown:

  1. On the Convergence of Adam and Beyond
     Sashank J. Reddi, Satyen Kale, Sanjiv Kumar · 19 Apr 2019 · 2,499 citations
  2. On the Influence of Momentum Acceleration on Online Learning
     Kun Yuan, Bicheng Ying, Ali H. Sayed · 14 Mar 2016 · 58 citations
  3. Adam: A Method for Stochastic Optimization (ODL)
     Diederik P. Kingma, Jimmy Ba · 22 Dec 2014 · 150,039 citations
  4. ADADELTA: An Adaptive Learning Rate Method (ODL)
     Matthew D. Zeiler · 22 Dec 2012 · 6,624 citations
  5. Advances in Optimizing Recurrent Networks (ODL)
     Yoshua Bengio, Nicolas Boulanger-Lewandowski, Razvan Pascanu · 04 Dec 2012 · 522 citations