ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
On minimal representations of shallow ReLU networks

12 August 2021
Steffen Dereich
Sebastian Kassing

Papers citing "On minimal representations of shallow ReLU networks"

7 / 7 papers shown
Stochastic gradient descent with noise of machine learning type. Part II: Continuous time analysis
Stephan Wojtowytsch
04 Jun 2021
Central limit theorems for stochastic gradient descent with averaging for stable manifolds
Steffen Dereich
Sebastian Kassing
19 Dec 2019
Convergence rates for the stochastic gradient descent method for non-convex objective functions
Benjamin J. Fehrman
Benjamin Gess
Arnulf Jentzen
02 Apr 2019
Reducing Parameter Space for Neural Network Training
Tong Qin
Ling Zhou
D. Xiu
22 May 2018
The loss landscape of overparameterized neural networks
Y. Cooper
26 Apr 2018
Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations
Boris Hanin
09 Aug 2017
Understanding Deep Neural Networks with Rectified Linear Units
R. Arora
A. Basu
Poorya Mianjy
Anirbit Mukherjee
04 Nov 2016