Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation

9 July 2021
Arnulf Jentzen, Adrian Riekert

Papers citing "Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation"

11 / 11 papers shown
Convergence of Shallow ReLU Networks on Weakly Interacting Data
Léo Dana, Francis R. Bach, Loucas Pillaud-Vivien
MLT · 24 Feb 2025
Predicting the fatigue life of asphalt concrete using neural networks
Jakub Houlík, Jan Valentin, Vaclav Nezerka
AI4CE · 03 Jun 2024
Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks
Arnulf Jentzen, Adrian Riekert
07 Feb 2024
A convergence result of a continuous model of deep learning via Łojasiewicz--Simon inequality
Noboru Isobe
26 Nov 2023
Learning a Neuron by a Shallow ReLU Network: Dynamics and Implicit Bias for Correlated Inputs
D. Chistikov, Matthias Englert, R. Lazic
MLT · 10 Jun 2023
Operator theory, kernels, and Feedforward Neural Networks
P. Jorgensen, Myung-Sin Song, James Tian
03 Jan 2023
Normalized gradient flow optimization in the training of ReLU artificial neural networks
Simon Eberle, Arnulf Jentzen, Adrian Riekert, G. Weiss
13 Jul 2022
Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa
MLT · 13 Dec 2021
Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation
Simon Eberle, Arnulf Jentzen, Adrian Riekert, G. Weiss
18 Aug 2021
A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions
Arnulf Jentzen, Adrian Riekert
10 Aug 2021
Landscape analysis for shallow neural networks: complete classification of critical points for affine target functions
Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek
19 Mar 2021