Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates
L. Smith, Nicholay Topin
arXiv:1708.07120, 23 August 2017
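The paper above trains networks in far fewer iterations by running a single learning-rate cycle with an unusually large peak rate, followed by a short final decay well below the starting rate. A minimal, illustrative sketch of such a schedule, assuming piecewise-linear ramps; the function name, default rates, and the final-phase fraction are assumptions for illustration, not the authors' code:

```python
def one_cycle_lr(step, total_steps, lr_min=0.1, lr_max=3.0, final_frac=0.1):
    """Learning rate at `step` (0-indexed) under a one-cycle policy.

    The rate ramps linearly from lr_min up to a large lr_max over the
    first half of the cycle, back down over the second half, then decays
    toward zero during a short final phase (`final_frac` of training).
    """
    anneal_steps = int(total_steps * final_frac)  # final decay phase
    cycle_steps = total_steps - anneal_steps      # the main cycle
    half = cycle_steps // 2
    if step < half:                               # linear warm-up
        return lr_min + (lr_max - lr_min) * step / half
    if step < cycle_steps:                        # linear cool-down
        return lr_max - (lr_max - lr_min) * (step - half) / (cycle_steps - half)
    # final phase: decay from lr_min toward zero
    return lr_min * (1 - (step - cycle_steps) / anneal_steps)
```

With the defaults above, a 100-step run peaks at 3.0 at step 45 and ends near zero; the large peak is what distinguishes this schedule from a conventional warm-up/decay.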
Papers citing "Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates" (25 of 125 shown)
1. Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers. Robin M. Schmidt, Frank Schneider, Philipp Hennig. [ODL] 03 Jul 2020.
2. Hippo: Taming Hyper-parameter Optimization of Deep Learning with Stage Trees. Ahnjae Shin, Do Yoon Kim, Joo Seong Jeong, Byung-Gon Chun. 22 Jun 2020.
3. MOSQUITO-NET: A deep learning based CADx system for malaria diagnosis along with model interpretation using GradCam and class activation maps. Aayush Kumar, Sanat B Singh, S. Satapathy, M. Rout. 17 Jun 2020.
4. Monotone operator equilibrium networks. Ezra Winston, J. Zico Kolter. 15 Jun 2020.
5. Parsimonious Computing: A Minority Training Regime for Effective Prediction in Large Microarray Expression Data Sets. Shailesh Sridhar, Snehanshu Saha, Azhar Shaikh, Rahul Yedida, S. Saha. 18 May 2020.
6. F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning. Wenhao Li, Bo Jin, Xiangfeng Wang, Junchi Yan, H. Zha. 17 Apr 2020.
7. Editable Neural Networks. A. Sinitsin, Vsevolod Plokhotnyuk, Dmitriy V. Pyrkin, Sergei Popov, Artem Babenko. [KELM] 01 Apr 2020.
8. The Two Regimes of Deep Network Training. Guillaume Leclerc, A. Madry. 24 Feb 2020.
9. Fast is better than free: Revisiting adversarial training. Eric Wong, Leslie Rice, J. Zico Kolter. [AAML, OOD] 12 Jan 2020.
10. Object as Hotspots: An Anchor-Free 3D Object Detection Approach via Firing of Hotspots. Qi Chen, Lin Sun, Zhixin Wang, Kui Jia, Alan Yuille. [3DPC] 30 Dec 2019.
11. Optimization for deep learning: theory and algorithms. Ruoyu Sun. [ODL] 19 Dec 2019.
12. ResNetX: a more disordered and deeper network architecture. Wenfeng Feng, Xin Zhang, Guangpeng Zhao. 18 Dec 2019.
13. Semi-Supervised Learning for Cancer Detection of Lymph Node Metastases. Amit Kumar Jaiswal, Ivan Panshin, D. Shulkin, Nagender Aneja, Samuel Abramov. [SSL, MedIm] 23 Jun 2019.
14. AI Feynman: a Physics-Inspired Method for Symbolic Regression. S. Udrescu, Max Tegmark. 27 May 2019.
15. Accurate Visual Localization for Automotive Applications. Eli Brosh, Matan Friedmann, I. Kadar, Lev Yitzhak Lavy, Elad Levi, S. Rippa, Y. Lempert, Bruno Fernandez-Ruiz, Roei Herzig, Trevor Darrell. 01 May 2019.
16. Forget the Learning Rate, Decay Loss. Jiakai Wei. 27 Apr 2019.
17. Learning representations of irregular particle-detector geometry with distance-weighted graph networks. S. Qasim, J. Kieseler, Y. Iiyama, M. Pierini. 21 Feb 2019.
18. Image Classification at Supercomputer Scale. Chris Ying, Sameer Kumar, Dehao Chen, Tao Wang, Youlong Cheng. [VLM] 16 Nov 2018.
19. Robust Learning of Tactile Force Estimation through Robot Interaction. Balakumar Sundaralingam, Alexander Lambert, Ankur Handa, Byron Boots, Tucker Hermans, Stan Birchfield, Nathan D. Ratliff, Dieter Fox. [OOD] 15 Oct 2018.
20. Stochastic Gradient Descent on Separable Data: Exact Convergence with a Fixed Learning Rate. Mor Shpigel Nacson, Nathan Srebro, Daniel Soudry. [FedML, MLT] 05 Jun 2018.
21. Analysis of DAWNBench, a Time-to-Accuracy Machine Learning Performance Benchmark. Cody Coleman, Daniel Kang, Deepak Narayanan, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, K. Olukotun, Christopher Ré, Matei A. Zaharia. 04 Jun 2018.
22. Understanding Batch Normalization. Johan Bjorck, Carla P. Gomes, B. Selman, Kilian Q. Weinberger. 01 Jun 2018.
23. A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay. L. Smith. 26 Mar 2018.
24. A Walk with SGD. Chen Xing, Devansh Arpit, Christos Tsirigotis, Yoshua Bengio. 24 Feb 2018.
25. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang. [ODL] 15 Sep 2016.