Learning to Accelerate by the Methods of Step-size Planning

1 April 2022
Hengshuai Yao

Papers citing "Learning to Accelerate by the Methods of Step-size Planning"

A Simple Convergence Proof of Adam and Adagrad
Alexandre Défossez, Léon Bottou, Francis R. Bach, Nicolas Usunier
05 Mar 2020

An Introduction to Deep Reinforcement Learning
Vincent François-Lavet, Peter Henderson, Riashat Islam, Marc G. Bellemare, Joelle Pineau
OffRL, AI4CE
30 Nov 2018

L4: Practical loss-based stepsize adaptation for deep learning
Michal Rolínek, Georg Martius
ODL
14 Feb 2018

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD
09 Mar 2017

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL
15 Sep 2016