ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Explorations on high dimensional landscapes
arXiv:1412.6615 (v4, latest)
20 December 2014
Levent Sagun, V. U. Güney, Gerard Ben Arous, Yann LeCun
ArXiv (abs) · PDF · HTML
Papers citing "Explorations on high dimensional landscapes"

31 / 31 papers shown
  1. The Persistence of Neural Collapse Despite Low-Rank Bias: An Analytic Perspective Through Unconstrained Features — Connall Garrod, Jonathan P. Keating — 30 Oct 2024
  2. Universal characteristics of deep neural network loss surfaces from random matrix theory — Nicholas P. Baskerville, J. Keating, F. Mezzadri, J. Najnudel, Diego Granziol — 17 May 2022
  3. Exponentially Many Local Minima in Quantum Neural Networks — Xuchen You, Xiaodi Wu — 06 Oct 2021
  4. Geometry of the Loss Landscape in Overparameterized Neural Networks: Symmetries and Invariances — Berfin Şimşek, François Ged, Arthur Jacot, Francesco Spadaro, Clément Hongler, W. Gerstner, Johanni Brea [AI4CE] — 25 May 2021
  5. Appearance of Random Matrix Theory in Deep Learning — Nicholas P. Baskerville, Diego Granziol, J. Keating — 12 Feb 2021
  6. Optimizing Mode Connectivity via Neuron Alignment — N. Joseph Tatro, Pin-Yu Chen, Payel Das, Igor Melnyk, P. Sattigeri, Rongjie Lai [MoMe] — 05 Sep 2020
  7. The Loss Surfaces of Neural Networks with General Activation Functions — Nicholas P. Baskerville, J. Keating, F. Mezzadri, J. Najnudel [ODL, AI4CE] — 08 Apr 2020
  8. On the Heavy-Tailed Theory of Stochastic Gradient Descent for Deep Neural Networks — Umut Simsekli, Mert Gurbuzbalaban, T. H. Nguyen, G. Richard, Levent Sagun — 29 Nov 2019
  9. Who is Afraid of Big Bad Minima? Analysis of Gradient-Flow in a Spiked Matrix-Tensor Model — Stefano Sarao Mannelli, Giulio Biroli, C. Cammarota, Florent Krzakala, Lenka Zdeborová — 18 Jul 2019
  10. Weight-space symmetry in deep networks gives rise to permutation saddles, connected by equal-loss valleys across the loss landscape — Johanni Brea, Berfin Şimşek, Bernd Illing, W. Gerstner — 05 Jul 2019
  11. Loss Surface Modality of Feed-Forward Neural Network Architectures — Anna Sergeevna Bosman, A. Engelbrecht, Mardé Helbig — 24 May 2019
  12. A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks — Umut Simsekli, Levent Sagun, Mert Gurbuzbalaban — 18 Jan 2019
  13. Non-attracting Regions of Local Minima in Deep and Wide Neural Networks — Henning Petzka, C. Sminchisescu — 16 Dec 2018
  14. The loss surface of deep linear networks viewed through the algebraic geometry lens — D. Mehta, Tianran Chen, Tingting Tang, J. Hauenstein [ODL] — 17 Oct 2018
  15. Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning — Charles H. Martin, Michael W. Mahoney [AI4CE] — 02 Oct 2018
  16. Trust-Region Algorithms for Training Responses: Machine Learning Methods Using Indefinite Hessian Approximations — Jennifer B. Erway, J. Griffin, Roummel F. Marcia, Riadh Omheni — 01 Jul 2018
  17. The committee machine: Computational to statistical gaps in learning a two-layers neural network — Benjamin Aubin, Antoine Maillard, Jean Barbier, Florent Krzakala, N. Macris, Lenka Zdeborová — 14 Jun 2018
  18. Trainability and Accuracy of Neural Networks: An Interacting Particle System Approach — Grant M. Rotskoff, Eric Vanden-Eijnden — 02 May 2018
  19. The Loss Surface of XOR Artificial Neural Networks — D. Mehta, Xiaojun Zhao, Edgar A. Bernal, D. Wales — 06 Apr 2018
  20. Comparing Dynamics: Deep Neural Networks versus Glassy Systems — Marco Baity-Jesi, Levent Sagun, Mario Geiger, S. Spigler, Gerard Ben Arous, C. Cammarota, Yann LeCun, Matthieu Wyart, Giulio Biroli [AI4CE] — 19 Mar 2018
  21. Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behavior — Charles H. Martin, Michael W. Mahoney [AI4CE] — 26 Oct 2017
  22. Deep Learning applied to Road Traffic Speed forecasting — T. Epelbaum, Fabrice Gamboa, Jean-Michel Loubes, J. Martin [AI4TS] — 02 Oct 2017
  23. Empirical Analysis of the Hessian of Over-Parametrized Neural Networks — Levent Sagun, Utku Evci, V. U. Güney, Yann N. Dauphin, Léon Bottou — 14 Jun 2017
  24. Sharp Minima Can Generalize For Deep Nets — Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio [ODL] — 15 Mar 2017
  25. Eigenvalues of the Hessian in Deep Learning: Singularity and Beyond — Levent Sagun, Léon Bottou, Yann LeCun [UQCV] — 22 Nov 2016
  26. Local minima in training of neural networks — G. Swirszcz, Wojciech M. Czarnecki, Razvan Pascanu [ODL] — 19 Nov 2016
  27. Topology and Geometry of Half-Rectified Network Optimization — C. Freeman, Joan Bruna — 04 Nov 2016
  28. On the Modeling of Error Functions as High Dimensional Landscapes for Weight Initialization in Learning Networks — Julius, Gopinath Mahale, Sumana T, C. S. Adityakrishna — 20 Jul 2016
  29. AdaNet: Adaptive Structural Learning of Artificial Neural Networks — Corinna Cortes, X. Gonzalvo, Vitaly Kuznetsov, M. Mohri, Scott Yang — 05 Jul 2016
  30. On the energy landscape of deep networks — Pratik Chaudhari, Stefano Soatto [ODL] — 20 Nov 2015
  31. Universal halting times in optimization and machine learning — Levent Sagun, T. Trogdon, Yann LeCun [BDL] — 19 Nov 2015