Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation
Arnulf Jentzen, Timo Welti
arXiv:2003.01291, 3 March 2020
Papers citing "Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation" (7 of 7 papers shown):

1. "Error analysis for deep neural network approximations of parametric hyperbolic conservation laws"
   Tim De Ryck, Siddhartha Mishra. 15 Jul 2022. [PINN]

2. "On the approximation of functions by tanh neural networks"
   Tim De Ryck, S. Lanthaler, Siddhartha Mishra. 18 Apr 2021.

3. "A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions"
   Arnulf Jentzen, Adrian Riekert. 01 Apr 2021. [MLT]

4. "Convergence rates for gradient descent in the training of overparameterized artificial neural networks with biases"
   Arnulf Jentzen, T. Kröger. 23 Feb 2021. [ODL]

5. "Convergence of stochastic gradient descent schemes for Lojasiewicz-landscapes"
   Steffen Dereich, Sebastian Kassing. 16 Feb 2021.

6. "Non-convergence of stochastic gradient descent in the training of deep neural networks"
   Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek. 12 Jun 2020.

7. "Solving the Kolmogorov PDE by means of deep learning"
   C. Beck, S. Becker, Philipp Grohs, Nor Jaafari, Arnulf Jentzen. 01 Jun 2018.