AutoDrop: Training Deep Learning Models with Automatic Learning Rate Drop (arXiv:2111.15317)
Yunfei Teng, Jing Wang, A. Choromańska
30 November 2021
Papers citing "AutoDrop: Training Deep Learning Models with Automatic Learning Rate Drop" (5 of 5 shown)

AutoLRS: Automatic Learning-Rate Schedule by Bayesian Optimization on the Fly
Yuchen Jin, Dinesh Manocha, Liangyu Zhao, Yibo Zhu, Chuanxiong Guo, Marco Canini, Arvind Krishnamurthy
22 May 2021

Provably Efficient Online Hyperparameter Optimization with Population-Based Bandits
Jack Parker-Holder, Vu Nguyen, Stephen J. Roberts
06 Feb 2020

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
17 Sep 2019

A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay
L. Smith
26 Mar 2018

Forward and Reverse Gradient-Based Hyperparameter Optimization
Luca Franceschi, Michele Donini, P. Frasconi, Massimiliano Pontil
06 Mar 2017