Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK
arXiv: 2007.04596
9 July 2020
Authors: Yuanzhi Li, Tengyu Ma, Hongyang R. Zhang
Community tag: MLT
Papers citing "Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK" (6 of 56 shown)
1. No bad local minima: Data independent training error guarantees for multilayer neural networks
   Daniel Soudry, Y. Carmon (26 May 2016)

2. Deep Learning without Poor Local Minima [ODL]
   Kenji Kawaguchi (23 May 2016)

3. Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity
   Amit Daniely, Roy Frostig, Y. Singer (18 Feb 2016)

4. When Are Nonconvex Problems Not Scary?
   Ju Sun, Qing Qu, John N. Wright (21 Oct 2015)

5. Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition
   Rong Ge, Furong Huang, Chi Jin, Yang Yuan (06 Mar 2015)

6. Visualizing and Understanding Convolutional Networks [FAtt, SSL]
   Matthew D. Zeiler, Rob Fergus (12 Nov 2013)