Landscape analysis for shallow neural networks: complete classification of critical points for affine target functions

19 March 2021 · arXiv:2103.10922
Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek

Papers citing "Landscape analysis for shallow neural networks: complete classification of critical points for affine target functions"

8 papers shown
Loss Landscape of Shallow ReLU-like Neural Networks: Stationary Points, Saddle Escape, and Network Embedding
Zhengqing Wu, Berfin Simsek, Francois Ged · ODL · 08 Feb 2024

Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation
Arnulf Jentzen, Adrian Riekert · 09 Jul 2021

A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions
Patrick Cheridito, Arnulf Jentzen, Adrian Riekert, Florian Rossmannek · 19 Feb 2021

On the Banach spaces associated with multi-layer ReLU networks: Function representation, approximation theory and gradient descent dynamics
E. Weinan, Stephan Wojtowytsch · MLT · 30 Jul 2020

Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss
Lénaïc Chizat, Francis R. Bach · MLT · 11 Feb 2020

Convergence rates for the stochastic gradient descent method for non-convex objective functions
Benjamin J. Fehrman, Benjamin Gess, Arnulf Jentzen · 02 Apr 2019

On the Power of Over-parametrization in Neural Networks with Quadratic Activation
S. Du, Jason D. Lee · 03 Mar 2018

Theoretical insights into the optimization landscape of over-parameterized shallow neural networks
Mahdi Soltanolkotabi, Adel Javanmard, Jason D. Lee · 16 Jul 2017