ResearchTrend.AI

Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK

9 July 2020
Yuanzhi Li, Tengyu Ma, Hongyang R. Zhang
arXiv:2007.04596 · MLT

Papers citing "Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK"

6 / 56 papers shown

No bad local minima: Data independent training error guarantees for multilayer neural networks
Daniel Soudry, Y. Carmon · 26 May 2016

Deep Learning without Poor Local Minima
Kenji Kawaguchi · ODL · 23 May 2016

Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity
Amit Daniely, Roy Frostig, Y. Singer · 18 Feb 2016

When Are Nonconvex Problems Not Scary?
Ju Sun, Qing Qu, John N. Wright · 21 Oct 2015

Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition
Rong Ge, Furong Huang, Chi Jin, Yang Yuan · 06 Mar 2015

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus · FAtt · SSL · 12 Nov 2013