Parabolic Approximation Line Search for DNNs
Max Mutschler, A. Zell · ODL · 28 March 2019 · arXiv:1903.11991
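The core idea, per the paper: the loss along the normalized negative-gradient direction is locally modeled by a one-dimensional parabola fitted from the current loss, its directional derivative, and one additional loss measurement, and the update jumps to the parabola's minimum. Below is a minimal sketch of that idea, not the authors' reference implementation; the toy loss, finite-difference gradient, measuring step `mu`, and the fallback for non-positive curvature are illustrative assumptions.

```python
# Sketch of one parabolic-approximation line-search step.
# Assumptions (not from the paper's code): toy loss, finite-difference
# gradient, fixed measuring step mu, simple fallback when the fitted
# parabola has no minimum.
import numpy as np

def loss(theta):
    # Toy loss standing in for a mini-batch DNN loss.
    return 0.5 * theta @ theta + 0.1 * np.sin(theta).sum()

def grad(theta, eps=1e-6):
    # Central-difference gradient; a real implementation uses backprop.
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (loss(theta + e) - loss(theta - e)) / (2.0 * eps)
    return g

def pal_step(theta, mu=0.1):
    g = grad(theta)
    d = -g / (np.linalg.norm(g) + 1e-12)  # normalized descent direction
    f0 = loss(theta)                      # loss at the current point
    slope = g @ d                         # directional derivative, <= 0
    f_mu = loss(theta + mu * d)           # one extra loss measurement
    # Fit f(t) ~= f0 + slope*t + 0.5*c*t^2 through the three
    # measurements; c is the curvature of the parabola along d.
    c = 2.0 * (f_mu - f0 - slope * mu) / mu**2
    if c <= 0.0:
        return theta + mu * d             # illustrative fallback step
    t_star = -slope / c                   # argmin of the fitted parabola
    return theta + t_star * d

theta = np.array([2.0, -1.5, 0.5])
for _ in range(20):
    theta = pal_step(theta)
print(loss(theta))  # approaches the toy loss's minimum
```

The single extra forward pass at `theta + mu * d` is the per-update cost of the fit; everything else reuses the gradient already computed for the step.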

Papers citing "Parabolic Approximation Line Search for DNNs" (33 papers shown)
• Using a one dimensional parabolic model of the full-batch loss to estimate learning rates during training · Max Mutschler, Kevin Laube, A. Zell · ODL · 31 Aug 2021 · 1 citation
• KOALA: A Kalman Optimization Algorithm with Loss Adaptivity · A. Davtyan, Sepehr Sameni, L. Cerkezi, Givi Meishvili, Adam Bielski, Paolo Favaro · ODL · 07 Jul 2021 · 2 citations
• Empirically explaining SGD from a line search perspective · Max Mutschler, A. Zell · ODL, LRM · 31 Mar 2021 · 4 citations
• Empirical study towards understanding line search approximations for training neural networks · Younghwan Chae, D. Wilke · 15 Sep 2019 · 11 citations
• Automatic and Simultaneous Adjustment of Learning Rate and Momentum for Stochastic Gradient Descent · Tomer Lancewicki, Selçuk Köprü · 20 Aug 2019 · 5 citations
• On the Variance of the Adaptive Learning Rate and Beyond · Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, Jiawei Han · ODL · 08 Aug 2019 · 1,894 citations
• Training Neural Networks for and by Interpolation · Leonard Berrada, Andrew Zisserman, M. P. Kumar · 3DH · 13 Jun 2019 · 62 citations
• Large Scale Structure of Neural Network Loss Landscapes · Stanislav Fort, Stanislaw Jastrzebski · 11 Jun 2019 · 83 citations
• EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks · Mingxing Tan, Quoc V. Le · 3DV, MedIm · 28 May 2019 · 17,950 citations
• Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates · Sharan Vaswani, Aaron Mishkin, I. Laradji, Mark Schmidt, Gauthier Gidel, Simon Lacoste-Julien · ODL · 24 May 2019 · 208 citations
• On the Convergence of Adam and Beyond · Sashank J. Reddi, Satyen Kale, Sanjiv Kumar · 19 Apr 2019 · 2,482 citations
• Gradient-only line searches: An Alternative to Probabilistic Line Searches · D. Kafka, D. Wilke · ODL · 22 Mar 2019 · 14 citations
• Adaptive Gradient Methods with Dynamic Bound of Learning Rate · Liangchen Luo, Yuanhao Xiong, Yan Liu, Xu Sun · ODL · 26 Feb 2019 · 600 citations
• Asymmetric Valleys: Beyond Sharp and Flat Local Minima · Haowei He, Gao Huang, Yang Yuan · ODL, MLT · 02 Feb 2019 · 149 citations
• Averaging Weights Leads to Wider Optima and Better Generalization · Pavel Izmailov, Dmitrii Podoprikhin, T. Garipov, Dmitry Vetrov, A. Wilson · FedML, MoMe · 14 Mar 2018 · 1,643 citations
• A Walk with SGD · Chen Xing, Devansh Arpit, Christos Tsirigotis, Yoshua Bengio · 24 Feb 2018 · 118 citations
• L4: Practical loss-based stepsize adaptation for deep learning · Michal Rolínek, Georg Martius · ODL · 14 Feb 2018 · 64 citations
• ShakeDrop Regularization for Deep Residual Learning · Yoshihiro Yamada, Masakazu Iwamura, Takuya Akiba, K. Kise · 07 Feb 2018 · 162 citations
• MobileNetV2: Inverted Residuals and Linear Bottlenecks · Mark Sandler, Andrew G. Howard, Menglong Zhu, A. Zhmoginov, Liang-Chieh Chen · 13 Jan 2018 · 19,124 citations
• Visualizing the Loss Landscape of Neural Nets · Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein · 28 Dec 2017 · 1,873 citations
• Practical Gauss-Newton Optimisation for Deep Learning · Aleksandar Botev, H. Ritter, David Barber · ODL · 12 Jun 2017 · 228 citations
• Shake-Shake regularization · Xavier Gastaldi · 3DPC, BDL, OOD · 21 May 2017 · 380 citations
• Online Learning Rate Adaptation with Hypergradient Descent · A. G. Baydin, R. Cornish, David Martínez-Rubio, Mark Schmidt, Frank Wood · ODL · 14 Mar 2017 · 247 citations
• Big Batch SGD: Automated Inference using Adaptive Batch Sizes · Soham De, A. Yadav, David Jacobs, Tom Goldstein · ODL · 18 Oct 2016 · 62 citations
• Densely Connected Convolutional Networks · Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger · PINN, 3DV · 25 Aug 2016 · 36,599 citations
• Neither Quick Nor Proper -- Evaluation of QuickProp for Learning Deep Neural Networks · C. Brust, Sven Sickert, Marcel Simon, E. Rodner, Joachim Denzler · SSeg, VLM · 14 Jun 2016 · 3 citations
• Deep Residual Learning for Image Recognition · Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · MedIm · 10 Dec 2015 · 192,638 citations
• Optimizing Neural Networks with Kronecker-factored Approximate Curvature · James Martens, Roger C. Grosse · ODL · 19 Mar 2015 · 999 citations
• Probabilistic Line Searches for Stochastic Optimization · Maren Mahsereci, Philipp Hennig · ODL · 10 Feb 2015 · 126 citations
• Adam: A Method for Stochastic Optimization · Diederik P. Kingma, Jimmy Ba · ODL · 22 Dec 2014 · 149,474 citations
• Qualitatively characterizing neural network optimization problems · Ian Goodfellow, Oriol Vinyals, Andrew M. Saxe · ODL · 19 Dec 2014 · 519 citations
• Very Deep Convolutional Networks for Large-Scale Image Recognition · Karen Simonyan, Andrew Zisserman · FAtt, MDE · 04 Sep 2014 · 99,991 citations
• ADADELTA: An Adaptive Learning Rate Method · Matthew D. Zeiler · ODL · 22 Dec 2012 · 6,619 citations