Loss Surface Modality of Feed-Forward Neural Network Architectures
arXiv:1905.10268, v2 (latest), 24 May 2019
Anna Sergeevna Bosman, A. Engelbrecht, Mardé Helbig

Papers citing "Loss Surface Modality of Feed-Forward Neural Network Architectures"

10 / 10 papers shown

  • Visualising Basins of Attraction for the Cross-Entropy and the Squared Error Neural Network Loss Functions. Anna Sergeevna Bosman, A. Engelbrecht, Mardé Helbig. 08 Jan 2019.
  • Deep, Skinny Neural Networks are not Universal Approximators. Jesse Johnson. 30 Sep 2018.
  • The Loss Surface of XOR Artificial Neural Networks. D. Mehta, Xiaojun Zhao, Edgar A. Bernal, D. Wales. 06 Apr 2018.
  • Empirical Analysis of the Hessian of Over-Parametrized Neural Networks. Levent Sagun, Utku Evci, V. U. Güney, Yann N. Dauphin, Léon Bottou. 14 Jun 2017.
  • The loss surface of deep and wide neural networks. Quynh N. Nguyen, Matthias Hein. 26 Apr 2017.
  • Depth Creates No Bad Local Minima. Haihao Lu, Kenji Kawaguchi. 27 Feb 2017.
  • Entropy-SGD: Biasing Gradient Descent Into Wide Valleys. Pratik Chaudhari, A. Choromańska, Stefano Soatto, Yann LeCun, Carlo Baldassi, C. Borgs, J. Chayes, Levent Sagun, R. Zecchina. 06 Nov 2016.
  • On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang. 15 Sep 2016.
  • Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter. 23 Nov 2015.
  • Explorations on high dimensional landscapes. Levent Sagun, V. U. Güney, Gerard Ben Arous, Yann LeCun. 20 Dec 2014.