ResearchTrend.AI

The phase diagram of approximation rates for deep neural networks
Dmitry Yarotsky, Anton Zhevnerchuk
arXiv:1906.09477 · 22 June 2019

Papers citing "The phase diagram of approximation rates for deep neural networks" (32 papers shown)

  1. Statistically guided deep learning. Michael Kohler, A. Krzyżak. 11 Apr 2025. [ODL, BDL]
  2. Deep Kalman Filters Can Filter. Blanka Horvath, Anastasis Kratsios, Yannick Limmer, Xuwei Yang. 31 Dec 2024.
  3. On the expressiveness and spectral bias of KANs. Yixuan Wang, Jonathan W. Siegel, Ziming Liu, Thomas Y. Hou. 02 Oct 2024.
  4. On the optimal approximation of Sobolev and Besov functions using deep ReLU neural networks. Yunfei Yang. 02 Sep 2024.
  5. Approximation Rates and VC-Dimension Bounds for (P)ReLU MLP Mixture of Experts. Anastasis Kratsios, Haitz Sáez de Ocáriz Borde, Takashi Furuya, Marc T. Law. 05 Feb 2024. [MoE]
  6. Universal Consistency of Wide and Deep ReLU Neural Networks and Minimax Optimal Convergence Rates for Kolmogorov-Donoho Optimal Function Classes. Hyunouk Ko, Xiaoming Huo. 08 Jan 2024.
  7. Deep Learning and Computational Physics (Lecture Notes). Deep Ray, Orazio Pinti, Assad A. Oberai. 03 Jan 2023. [PINN, AI4CE]
  8. Instance-Dependent Generalization Bounds via Optimal Transport. Songyan Hou, Parnian Kassraie, Anastasis Kratsios, Andreas Krause, Jonas Rothfuss. 02 Nov 2022.
  9. Analysis of the rate of convergence of an over-parametrized deep neural network estimate learned by gradient descent. Michael Kohler, A. Krzyżak. 04 Oct 2022.
  10. Approximation results for Gradient Descent trained Shallow Neural Networks in $1d$. R. Gentile, G. Welper. 17 Sep 2022. [ODL]
  11. On the universal consistency of an over-parametrized deep neural network estimate learned by gradient descent. Selina Drews, Michael Kohler. 30 Aug 2022.
  12. The BUTTER Zone: An Empirical Study of Training Dynamics in Fully Connected Neural Networks. Charles Edison Tripp, J. Perr-Sauer, L. Hayne, M. Lunacek, Jamil Gafur. 25 Jul 2022. [AI4CE]
  13. A general approximation lower bound in $L^p$ norm, with applications to feed-forward neural networks. E. M. Achour, Armand Foucault, Sébastien Gerchinovitz, François Malgouyres. 09 Jun 2022.
  14. Qualitative neural network approximation over R and C: Elementary proofs for analytic and polynomial activation. Josiah Park, Stephan Wojtowytsch. 25 Mar 2022.
  15. A Note on Machine Learning Approach for Computational Imaging. Bin Dong. 24 Feb 2022.
  16. Designing Universal Causal Deep Learning Models: The Geometric (Hyper)Transformer. Beatrice Acciaio, Anastasis Kratsios, G. Pammer. 31 Jan 2022. [OOD]
  17. Deep Nonparametric Estimation of Operators between Infinite Dimensional Spaces. Hao Liu, Haizhao Yang, Minshuo Chen, T. Zhao, Wenjing Liao. 01 Jan 2022.
  18. Deep Network Approximation: Achieving Arbitrary Accuracy with Fixed Number of Neurons. Zuowei Shen, Haizhao Yang, Shijun Zhang. 06 Jul 2021.
  19. Optimal Approximation Rate of ReLU Networks in terms of Width and Depth. Zuowei Shen, Haizhao Yang, Shijun Zhang. 28 Feb 2021.
  20. Quantitative approximation results for complex-valued neural networks. A. Caragea, D. Lee, J. Maly, G. Pfander, F. Voigtlaender. 25 Feb 2021.
  21. Size and Depth Separation in Approximating Benign Functions with Neural Networks. Gal Vardi, Daniel Reichman, T. Pitassi, Ohad Shamir. 30 Jan 2021.
  22. Reproducing Activation Function for Deep Learning. Senwei Liang, Liyao Lyu, Chunmei Wang, Haizhao Yang. 13 Jan 2021.
  23. The universal approximation theorem for complex-valued neural networks. F. Voigtlaender. 06 Dec 2020.
  24. Neural Network Approximation: Three Hidden Layers Are Enough. Zuowei Shen, Haizhao Yang, Shijun Zhang. 25 Oct 2020.
  25. Phase Transitions in Rate Distortion Theory and Deep Learning. Philipp Grohs, Andreas Klotz, F. Voigtlaender. 03 Aug 2020.
  26. The Kolmogorov-Arnold representation theorem revisited. Johannes Schmidt-Hieber. 31 Jul 2020.
  27. Expressivity of Deep Neural Networks. Ingo Gühring, Mones Raslan, Gitta Kutyniok. 09 Jul 2020.
  28. Two-Layer Neural Networks for Partial Differential Equations: Optimization and Generalization Theory. Tao Luo, Haizhao Yang. 28 Jun 2020.
  29. Approximation in shift-invariant spaces with deep ReLU neural networks. Yunfei Yang, Zhen Li, Yang Wang. 25 May 2020.
  30. On Deep Instrumental Variables Estimate. Ruiqi Liu, Zuofeng Shang, Guang Cheng. 30 Apr 2020.
  31. Deep Network Approximation for Smooth Functions. Jianfeng Lu, Zuowei Shen, Haizhao Yang, Shijun Zhang. 09 Jan 2020.
  32. Benefits of depth in neural networks. Matus Telgarsky. 14 Feb 2016.