AutoDrop: Training Deep Learning Models with Automatic Learning Rate Drop

30 November 2021 · arXiv: 2111.15317
Yunfei Teng, Jing Wang, A. Choromańska
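For context, a "learning rate drop" means discretely shrinking the optimizer's step size during training rather than decaying it continuously. The sketch below is a minimal illustration of one automatic drop criterion, loss-plateau detection (the pattern behind PyTorch's built-in ReduceLROnPlateau), and is not the AutoDrop algorithm from the paper; the class name SimpleAutoDrop and its parameters are hypothetical.

import torch

# Illustrative only -- NOT the AutoDrop algorithm from the paper.
# Drop the learning rate when the monitored loss stops improving
# for `patience` consecutive checks.
class SimpleAutoDrop:
    def __init__(self, optimizer, factor=0.1, patience=5):
        self.optimizer = optimizer
        self.factor = factor      # multiply each LR by this on a drop
        self.patience = patience  # checks without improvement before dropping
        self.best = float("inf")
        self.bad_checks = 0

    def step(self, loss):
        if loss < self.best:
            self.best = loss
            self.bad_checks = 0
        else:
            self.bad_checks += 1
            if self.bad_checks >= self.patience:
                for group in self.optimizer.param_groups:
                    group["lr"] *= self.factor
                self.bad_checks = 0

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = SimpleAutoDrop(opt)
# inside the training loop, call: scheduler.step(val_loss)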

Papers citing "AutoDrop: Training Deep Learning Models with Automatic Learning Rate Drop"

5 papers shown:
  • AutoLRS: Automatic Learning-Rate Schedule by Bayesian Optimization on the Fly (22 May 2021)
    Yuchen Jin, Dinesh Manocha, Liangyu Zhao, Yibo Zhu, Chuanxiong Guo, Marco Canini, Arvind Krishnamurthy
  • Provably Efficient Online Hyperparameter Optimization with Population-Based Bandits (06 Feb 2020)
    Jack Parker-Holder, Vu Nguyen, Stephen J. Roberts · OffRL
  • Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism (17 Sep 2019)
    M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro · MoE
  • A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay (26 Mar 2018)
    L. Smith
  • Forward and Reverse Gradient-Based Hyperparameter Optimization (06 Mar 2017)
    Luca Franceschi, Michele Donini, P. Frasconi, Massimiliano Pontil