Optimal approximation of continuous functions by very deep ReLU networks
Dmitry Yarotsky
arXiv:1802.03620, 10 February 2018

Papers citing "Optimal approximation of continuous functions by very deep ReLU networks"

50 of 188 citing papers shown:

  • Approximation Results for Gradient Descent trained Neural Networks. G. Welper. 09 Sep 2023.
  • Non-Asymptotic Bounds for Adversarial Excess Risk under Misspecified Models. Changyu Liu, Yuling Jiao, Junhui Wang, Jian Huang. 02 Sep 2023.
  • On the Optimal Expressive Power of ReLU DNNs and Its Application in Approximation with Kolmogorov Superposition Theorem. Juncai He. 10 Aug 2023.
  • Tractability of approximation by general shallow networks. H. Mhaskar, Tong Mao. 07 Aug 2023.
  • Deep Network Approximation: Beyond ReLU to Diverse Activation Functions. Shijun Zhang, Jianfeng Lu, Hongkai Zhao. 13 Jul 2023.
  • Why Shallow Networks Struggle to Approximate and Learn High Frequencies. Shijun Zhang, Hongkai Zhao, Yimin Zhong, Haomin Zhou. 29 Jun 2023.
  • Universal approximation with complex-valued deep narrow neural networks. Paul Geuchen, Thomas Jahn, Hannes Matt. 26 May 2023.
  • Exploring the Complexity of Deep Neural Networks through Functional Equivalence. Guohao Shen. 19 May 2023.
  • Differentiable Neural Networks with RePU Activation: with Applications to Score Estimation and Isotonic Regression. Guohao Shen, Yuling Jiao, Yuanyuan Lin, Jian Huang. 01 May 2023.
  • Deep neural network approximation of composite functions without the curse of dimensionality. Adrian Riekert. 12 Apr 2023.
  • Approximation of Nonlinear Functionals Using Deep ReLU Networks. Linhao Song, Jun Fan, Dirong Chen, Ding-Xuan Zhou. 10 Apr 2023.
  • Optimal rates of approximation by shallow ReLU^k neural networks and applications to nonparametric regression. Yunfei Yang, Ding-Xuan Zhou. 04 Apr 2023.
  • Operator learning with PCA-Net: upper and lower complexity bounds. S. Lanthaler. 28 Mar 2023.
  • Error convergence and engineering-guided hyperparameter search of PINNs: towards optimized I-FENN performance. Panos Pantidis, Habiba Eldababy, Christopher Miguel Tagle, M. Mobasher. 03 Mar 2023.
  • Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data. Jonathan W. Siegel. 02 Feb 2023.
  • On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network. Shijun Zhang, Jianfeng Lu, Hongkai Zhao. 29 Jan 2023.
  • Deep Operator Learning Lessens the Curse of Dimensionality for PDEs. Ke Chen, Chunmei Wang, Haizhao Yang. 28 Jan 2023.
  • Selected aspects of complex, hypercomplex and fuzzy neural networks. A. Niemczynowicz, Radosław Antoni Kycia, Maciej Jaworski, A. Siemaszko, J. Calabuig, ..., Baruch Schneider, Diana Berseghyan, Irina Perfiljeva, V. Novák, Piotr Artiemjew. 29 Dec 2022.
  • On Solution Functions of Optimization: Universal Approximation and Covering Number Bounds. Ming Jin, Vanshaj Khattar, Harshal D. Kaushik, Bilgehan Sel, R. Jia. 02 Dec 2022.
  • Limitations on approximation by deep and shallow neural networks. G. Petrova, P. Wojtaszczyk. 30 Nov 2022.
  • Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev and Besov Spaces. Jonathan W. Siegel. 25 Nov 2022.
  • On the Universal Approximation Property of Deep Fully Convolutional Neural Networks. Ting-Wei Lin, Zuowei Shen, Qianxiao Li. 25 Nov 2022.
  • Convergence analysis of unsupervised Legendre-Galerkin neural networks for linear second-order elliptic PDEs. Seungchan Ko, S. Yun, Youngjoon Hong. 16 Nov 2022.
  • Universal Time-Uniform Trajectory Approximation for Random Dynamical Systems with Recurrent Neural Networks. A. Bishop. 15 Nov 2022.
  • Active Learning with Neural Networks: Insights from Nonparametric Statistics. Yinglun Zhu, Robert D. Nowak. 15 Oct 2022.
  • Probabilistic partition of unity networks for high-dimensional regression problems. Tiffany Fan, N. Trask, M. D'Elia, Eric F. Darve. 06 Oct 2022.
  • Factor Augmented Sparse Throughput Deep ReLU Neural Networks for High Dimensional Regression. Jianqing Fan, Yihong Gu. 05 Oct 2022.
  • Limitations of neural network training due to numerical instability of backpropagation. Clemens Karner, V. Kazeev, P. Petersen. 03 Oct 2022.
  • Parameter-varying neural ordinary differential equations with partition-of-unity networks. Kookjin Lee, N. Trask. 01 Oct 2022.
  • Approximation results for Gradient Descent trained Shallow Neural Networks in 1d. R. Gentile, G. Welper. 17 Sep 2022.
  • Seeking Interpretability and Explainability in Binary Activated Neural Networks. Benjamin Leblanc, Pascal Germain. 07 Sep 2022.
  • Neural Network Approximation of Continuous Functions in High Dimensions with Applications to Inverse Problems. Santhosh Karnik, Rongrong Wang, M. Iwen. 28 Aug 2022.
  • CAS4DL: Christoffel Adaptive Sampling for function approximation via Deep Learning. Ben Adcock, Juan M. Cardenas, N. Dexter. 25 Aug 2022.
  • Deep Neural Network Approximation of Invariant Functions through Dynamical Systems. Qianxiao Li, T. Lin, Zuowei Shen. 18 Aug 2022.
  • The BUTTER Zone: An Empirical Study of Training Dynamics in Fully Connected Neural Networks. Charles Edison Tripp, J. Perr-Sauer, L. Hayne, M. Lunacek, Jamil Gafur. 25 Jul 2022.
  • Estimation of Non-Crossing Quantile Regression Process with Deep ReQU Neural Networks. Guohao Shen, Yuling Jiao, Yuanyuan Lin, J. Horowitz, Jian Huang. 21 Jul 2022.
  • Consistency of Neural Networks with Regularization. Xiaoxi Shen, Jinghang Lin. 22 Jun 2022.
  • A general approximation lower bound in L^p norm, with applications to feed-forward neural networks. El Mehdi Achour, Armand Foucault, Sébastien Gerchinovitz, François Malgouyres. 09 Jun 2022.
  • Neural Network Architecture Beyond Width and Depth. Zuowei Shen, Haizhao Yang, Shijun Zhang. 19 May 2022.
  • On the inability of Gaussian process regression to optimally learn compositional functions. M. Giordano, Kolyan Ray, Johannes Schmidt-Hieber. 16 May 2022.
  • Convolutional and Residual Networks Provably Contain Lottery Tickets. R. Burkholz. 04 May 2022.
  • Most Activation Functions Can Win the Lottery Without Excessive Depth. R. Burkholz. 04 May 2022.
  • ExSpliNet: An interpretable and expressive spline-based neural network. Daniele Fakhoury, Emanuele Fakhoury, H. Speleers. 03 May 2022.
  • Do ReLU Networks Have An Edge When Approximating Compactly-Supported Functions? Anastasis Kratsios, Behnoosh Zamanlooy. 24 Apr 2022.
  • How do noise tails impact on deep ReLU networks? Jianqing Fan, Yihong Gu, Wen-Xin Zhou. 20 Mar 2022.
  • IAE-Net: Integral Autoencoders for Discretization-Invariant Learning. Yong Zheng Ong, Zuowei Shen, Haizhao Yang. 10 Mar 2022.
  • Designing Universal Causal Deep Learning Models: The Geometric (Hyper)Transformer. Beatrice Acciaio, Anastasis Kratsios, G. Pammer. 31 Jan 2022.
  • Approximation bounds for norm constrained neural networks with applications to regression and GANs. Yuling Jiao, Yang Wang, Yunfei Yang. 24 Jan 2022.
  • Learning Neural Ranking Models Online from Implicit User Feedback. Yiling Jia, Hongning Wang. 17 Jan 2022.
  • Deep Nonparametric Estimation of Operators between Infinite Dimensional Spaces. Hao Liu, Haizhao Yang, Minshuo Chen, T. Zhao, Wenjing Liao. 01 Jan 2022.