ResearchTrend.AI
An empirical analysis of the optimization of deep network loss surfaces
Daniel Jiwoong Im, Michael Tao, K. Branson
13 December 2016 (arXiv:1612.04010)

Papers citing "An empirical analysis of the optimization of deep network loss surfaces" (12 papers)
  1. "On-the-Fly Guidance Training for Medical Image Registration"
     Yuelin Xin, Yicheng Chen, Shengxiang Ji, Kun Han, Xiaohui Xie. 29 Aug 2023.
  2. "Bidirectional Looking with A Novel Double Exponential Moving Average to Adaptive and Non-adaptive Momentum Optimizers"
     Yineng Chen, Z. Li, Lefei Zhang, Bo Du, Hai Zhao. 02 Jul 2023.
  3. "Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes"
     James Lucas, Juhan Bae, Michael Ruogu Zhang, Stanislav Fort, R. Zemel, Roger C. Grosse. 22 Apr 2021.
  4. "Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization"
     Hesham Mostafa, Xin Wang. 15 Feb 2019.
  5. "Error Feedback Fixes SignSGD and other Gradient Compression Schemes"
     Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian U. Stich, Martin Jaggi. 28 Jan 2019.
  6. "Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning"
     Charles H. Martin, Michael W. Mahoney. 02 Oct 2018.
  7. "Interpreting Adversarial Robustness: A View from Decision Surface in Input Space"
     Fuxun Yu, Chenchen Liu, Yanzhi Wang, Liang Zhao, Xiang Chen. 29 Sep 2018.
  8. "How Does Batch Normalization Help Optimization?"
     Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, A. Madry. 29 May 2018.
  9. "Quantitatively Evaluating GANs With Divergences Proposed for Training"
     Daniel Jiwoong Im, He Ma, Graham W. Taylor, K. Branson. 02 Mar 2018.
  10. "Visualizing the Loss Landscape of Neural Nets"
      Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein. 28 Dec 2017.
  11. "Sharp Minima Can Generalize For Deep Nets"
      Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio. 15 Mar 2017.
  12. "The Loss Surfaces of Multilayer Networks"
      A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun. 30 Nov 2014.