Depth Creates No Bad Local Minima (arXiv:1702.08580)
27 February 2017
Haihao Lu, Kenji Kawaguchi · ODL, FAtt
Papers citing "Depth Creates No Bad Local Minima" (21 / 21 papers shown)
System Identification and Control Using Lyapunov-Based Deep Neural Networks without Persistent Excitation: A Concurrent Learning Approach
Rebecca G. Hart, Omkar Sudhir Patil, Zachary I. Bell, Warren E. Dixon
15 May 2025
Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input
Ziang Chen, Rong Ge · MLT
10 Jan 2025
Function Space and Critical Points of Linear Convolutional Networks
Kathlén Kohn, Guido Montúfar, Vahid Shahverdi, Matthew Trager
12 Apr 2023
Critical Points and Convergence Analysis of Generative Deep Linear Networks Trained with Bures-Wasserstein Loss
Pierre Bréchet, Katerina Papagiannouli, Jing An, Guido Montúfar
06 Mar 2023
When Expressivity Meets Trainability: Fewer than n Neurons Can Work
Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Zhi-Quan Luo
21 Oct 2022
Exact Solutions of a Deep Linear Network
Liu Ziyin, Botao Li, Xiangmin Meng · ODL
10 Feb 2022
The loss landscape of deep linear neural networks: a second-order analysis
El Mehdi Achour, François Malgouyres, Sébastien Gerchinovitz · ODL
28 Jul 2021
Optimization for deep learning: theory and algorithms
Ruoyu Sun · ODL
19 Dec 2019
Weight-space symmetry in deep networks gives rise to permutation saddles, connected by equal-loss valleys across the loss landscape
Johanni Brea, Berfin Simsek, Bernd Illing, W. Gerstner
05 Jul 2019
Interpretable Few-Shot Learning via Linear Distillation
Arip Asadulaev, Igor Kuznetsov, Andrey Filchenkov · FedML, FAtt
13 Jun 2019
Width Provably Matters in Optimization for Deep Linear Neural Networks
S. Du, Wei Hu
24 Jan 2019
Non-attracting Regions of Local Minima in Deep and Wide Neural Networks
Henning Petzka, C. Sminchisescu
16 Dec 2018
Gradient descent aligns the layers of deep linear networks
Ziwei Ji, Matus Telgarsky
04 Oct 2018
Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks
Ohad Shamir
23 Sep 2018
Deep Neural Networks with Multi-Branch Architectures Are Less Non-Convex
Hongyang R. Zhang, Junru Shao, Ruslan Salakhutdinov
06 Jun 2018
Understanding Batch Normalization
Johan Bjorck, Carla P. Gomes, B. Selman, Kilian Q. Weinberger
01 Jun 2018
Mad Max: Affine Spline Insights into Deep Learning
Randall Balestriero, Richard Baraniuk · AI4CE
17 May 2018
The Global Optimization Geometry of Shallow Linear Neural Networks
Zhihui Zhu, Daniel Soudry, Yonina C. Eldar, M. Wakin · ODL
13 May 2018
Visualizing the Loss Landscape of Neural Nets
Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein
28 Dec 2017
Global optimality conditions for deep neural networks
Chulhee Yun, S. Sra, Ali Jadbabaie
08 Jul 2017
The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun · ODL
30 Nov 2014