ResearchTrend.AI

Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation (arXiv:2003.01291)
3 March 2020
Arnulf Jentzen
Timo Welti
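For context, the training procedure the paper analyses, stochastic gradient descent on a deep network starting from a random initialisation, can be sketched minimally in plain NumPy. The network size, learning rate, and toy target below are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target: f(x) = sin(x) on [-pi, pi] (illustrative).
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

# Random (Gaussian) initialisation of a one-hidden-layer ReLU network.
h = 32
W1 = rng.normal(0.0, 1.0, (1, h))
b1 = np.zeros(h)
W2 = rng.normal(0.0, 1.0 / np.sqrt(h), (h, 1))
b2 = np.zeros(1)

def full_mse():
    """Mean squared error of the current network over the whole sample."""
    pred = np.maximum(X @ W1 + b1, 0.0) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

mse0 = full_mse()                      # error at the random initialisation

lr, batch = 0.02, 32
for step in range(2000):
    idx = rng.integers(0, len(X), batch)   # draw a random minibatch
    xb, yb = X[idx], y[idx]
    z = xb @ W1 + b1                       # forward pass
    a = np.maximum(z, 0.0)                 # ReLU activation
    pred = a @ W2 + b2
    err = pred - yb                        # gradient of 0.5 * squared error
    # Backpropagation through the two layers.
    gW2 = a.T @ err / batch
    gb2 = err.mean(axis=0)
    dz = (err @ W2.T) * (z > 0)
    gW1 = xb.T @ dz / batch
    gb1 = dz.mean(axis=0)
    # Plain SGD update (no momentum, constant step size).
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= lr * g

mse1 = full_mse()                      # error after training
```

Over 2000 minibatch steps the empirical error drops well below its value at the random initialisation, the quantity whose expectation the paper's overall error analysis bounds.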

Papers citing "Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation"

7 papers shown

  • Error analysis for deep neural network approximations of parametric hyperbolic conservation laws
    Tim De Ryck, Siddhartha Mishra · PINN · 15 Jul 2022
  • On the approximation of functions by tanh neural networks
    Tim De Ryck, S. Lanthaler, Siddhartha Mishra · 18 Apr 2021
  • A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions
    Arnulf Jentzen, Adrian Riekert · MLT · 01 Apr 2021
  • Convergence rates for gradient descent in the training of overparameterized artificial neural networks with biases
    Arnulf Jentzen, T. Kröger · ODL · 23 Feb 2021
  • Convergence of stochastic gradient descent schemes for Łojasiewicz-landscapes
    Steffen Dereich, Sebastian Kassing · 16 Feb 2021
  • Non-convergence of stochastic gradient descent in the training of deep neural networks
    Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek · 12 Jun 2020
  • Solving the Kolmogorov PDE by means of deep learning
    C. Beck, S. Becker, Philipp Grohs, Nor Jaafari, Arnulf Jentzen · 01 Jun 2018