Cited By: arXiv 2108.08106

Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation (18 August 2021)
Simon Eberle, Arnulf Jentzen, Adrian Riekert, G. Weiss
Papers citing "Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation" (5 papers shown)

| Title | Authors | Topic | Date |
|---|---|---|---|
| Normalized gradient flow optimization in the training of ReLU artificial neural networks | Simon Eberle, Arnulf Jentzen, Adrian Riekert, G. Weiss | | 13 Jul 2022 |
| Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs | Etienne Boursier, Loucas Pillaud-Vivien, Nicolas Flammarion | ODL | 02 Jun 2022 |
| Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions | Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa | MLT | 13 Dec 2021 |
| Landscape analysis for shallow neural networks: complete classification of critical points for affine target functions | Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek | | 19 Mar 2021 |
| Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition | Hamed Karimi, J. Nutini, Mark W. Schmidt | | 16 Aug 2016 |