Cited By
Approximation results for Gradient Descent trained Shallow Neural Networks in $1d$ (arXiv:2209.08399)
17 September 2022
R. Gentile
G. Welper
ODL
Papers citing "Approximation results for Gradient Descent trained Shallow Neural Networks in $1d$"
50 / 58 papers shown
A brief review of the Deep BSDE method for solving high-dimensional partial differential equations
Jiequn Han
Arnulf Jentzen
Weinan E
AI4CE
63
0
0
07 May 2025
Neural Tangent Kernel Analysis of Deep Narrow Neural Networks
Jongmin Lee
Jooyeon Choi
Ernest K. Ryu
Albert No
23
10
0
07 Feb 2022
Tight Convergence Rate Bounds for Optimization Under Power Law Spectral Conditions
Maksim Velikanov
Dmitry Yarotsky
62
8
0
02 Feb 2022
Neural Tangent Kernel Beyond the Infinite-Width Limit: Effects of Depth and Initialization
Mariia Seleznova
Gitta Kutyniok
217
20
0
01 Feb 2022
Subquadratic Overparameterization for Shallow Neural Networks
Chaehwan Song
Ali Ramezani-Kebrya
Thomas Pethick
Armin Eftekhari
Volkan Cevher
73
31
0
02 Nov 2021
Optimal Convergence Rates for the Orthogonal Greedy Algorithm
Jonathan W. Siegel
Jinchao Xu
100
18
0
28 Jun 2021
The Modern Mathematics of Deep Learning
Julius Berner
Philipp Grohs
Gitta Kutyniok
P. Petersen
43
116
0
09 May 2021
Universal scaling laws in the gradient descent training of neural networks
Maksim Velikanov
Dmitry Yarotsky
78
9
0
02 May 2021
Proof of the Theory-to-Practice Gap in Deep Learning via Sampling Complexity bounds for Neural Network Approximation Spaces
Philipp Grohs
F. Voigtlaender
64
36
0
06 Apr 2021
Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel
Stanislav Fort
Gintare Karolina Dziugaite
Mansheej Paul
Sepideh Kharaghani
Daniel M. Roy
Surya Ganguli
105
193
0
28 Oct 2020
Exponential ReLU Neural Network Approximation Rates for Point and Edge Singularities
C. Marcati
J. Opschoor
P. Petersen
Christoph Schwab
61
30
0
23 Oct 2020
Towards a Mathematical Understanding of Neural Network-Based Machine Learning: what we know and what we don't
E. Weinan
Chao Ma
Stephan Wojtowytsch
Lei Wu
AI4CE
99
134
0
22 Sep 2020
Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS
Lin Chen
Sheng Xu
148
94
0
22 Sep 2020
Complexity Measures for Neural Networks with General Activation Functions Using Path-based Norms
Zhong Li
Chao Ma
Lei Wu
53
24
0
14 Sep 2020
Finite Versus Infinite Neural Networks: an Empirical Study
Jaehoon Lee
S. Schoenholz
Jeffrey Pennington
Ben Adlam
Lechao Xiao
Roman Novak
Jascha Narain Sohl-Dickstein
77
214
0
31 Jul 2020
On the Similarity between the Laplace and Neural Tangent Kernels
Amnon Geifman
A. Yadav
Yoni Kasten
Meirav Galun
David Jacobs
Ronen Basri
125
95
0
03 Jul 2020
Sharp Representation Theorems for ReLU Networks with Precise Dependence on Depth
Guy Bresler
Dheeraj M. Nagaraj
43
21
0
07 Jun 2020
Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology
Quynh N. Nguyen
Marco Mondelli
ODL
AI4CE
55
70
0
18 Feb 2020
The gap between theory and practice in function approximation with deep neural networks
Ben Adcock
N. Dexter
57
94
0
16 Jan 2020
Deep Network Approximation for Smooth Functions
Jianfeng Lu
Zuowei Shen
Haizhao Yang
Shijun Zhang
116
248
0
09 Jan 2020
PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke
Sam Gross
Francisco Massa
Adam Lerer
James Bradbury
...
Sasank Chilamkurthy
Benoit Steiner
Lu Fang
Junjie Bai
Soumith Chintala
ODL
547
42,639
0
03 Dec 2019
How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?
Zixiang Chen
Yuan Cao
Difan Zou
Quanquan Gu
75
123
0
27 Nov 2019
Neural tangent kernels, transportation mappings, and universal approximation
Ziwei Ji
Matus Telgarsky
Ruicheng Xian
59
39
0
15 Oct 2019
Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks
Yu Bai
Jason D. Lee
52
116
0
03 Oct 2019
Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks
Ziwei Ji
Matus Telgarsky
72
178
0
26 Sep 2019
Finite Depth and Width Corrections to the Neural Tangent Kernel
Boris Hanin
Mihai Nica
MDE
74
152
0
13 Sep 2019
Gradient Descent Finds Global Minima for Generalizable Deep Neural Networks of Practical Sizes
Kenji Kawaguchi
Jiaoyang Huang
ODL
51
57
0
05 Aug 2019
The phase diagram of approximation rates for deep neural networks
Dmitry Yarotsky
Anton Zhevnerchuk
65
122
0
22 Jun 2019
An Improved Analysis of Training Over-parameterized Deep Neural Networks
Difan Zou
Quanquan Gu
63
235
0
11 Jun 2019
Quadratic Suffices for Over-parametrization via Matrix Chernoff Bound
Zhao Song
Xin Yang
66
91
0
09 Jun 2019
On the Inductive Bias of Neural Tangent Kernels
A. Bietti
Julien Mairal
91
260
0
29 May 2019
On Learning Over-parameterized Neural Networks: A Functional Approximation Perspective
Lili Su
Pengkun Yang
MLT
68
54
0
26 May 2019
Approximation spaces of deep neural networks
Rémi Gribonval
Gitta Kutyniok
M. Nielsen
Felix Voigtländer
89
125
0
03 May 2019
On Exact Computation with an Infinitely Wide Neural Net
Sanjeev Arora
S. Du
Wei Hu
Zhiyuan Li
Ruslan Salakhutdinov
Ruosong Wang
238
928
0
26 Apr 2019
A Theoretical Analysis of Deep Neural Networks and Parametric PDEs
Gitta Kutyniok
P. Petersen
Mones Raslan
R. Schneider
87
198
0
31 Mar 2019
Nonlinear Approximation via Compositions
Zuowei Shen
Haizhao Yang
Shijun Zhang
70
91
0
26 Feb 2019
Error bounds for approximations with deep ReLU neural networks in $W^{s,p}$ norms
Ingo Gühring
Gitta Kutyniok
P. Petersen
89
200
0
21 Feb 2019
Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
Jaehoon Lee
Lechao Xiao
S. Schoenholz
Yasaman Bahri
Roman Novak
Jascha Narain Sohl-Dickstein
Jeffrey Pennington
213
1,108
0
18 Feb 2019
Towards moderate overparameterization: global convergence guarantees for training shallow neural networks
Samet Oymak
Mahdi Soltanolkotabi
52
322
0
12 Feb 2019
Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
Sanjeev Arora
S. Du
Wei Hu
Zhiyuan Li
Ruosong Wang
MLT
208
974
0
24 Jan 2019
Deep Neural Network Approximation Theory
Dennis Elbrächter
Dmytro Perekrestenko
Philipp Grohs
Helmut Bölcskei
68
210
0
08 Jan 2019
Scaling description of generalization with number of parameters in deep learning
Mario Geiger
Arthur Jacot
S. Spigler
Franck Gabriel
Levent Sagun
Stéphane d'Ascoli
Giulio Biroli
Clément Hongler
Matthieu Wyart
96
196
0
06 Jan 2019
On Lazy Training in Differentiable Programming
Lénaïc Chizat
Edouard Oyallon
Francis R. Bach
111
840
0
19 Dec 2018
A Convergence Theory for Deep Learning via Over-Parameterization
Zeyuan Allen-Zhu
Yuanzhi Li
Zhao Song
AI4CE
ODL
266
1,469
0
09 Nov 2018
Gradient Descent Finds Global Minima of Deep Neural Networks
S. Du
Jason D. Lee
Haochuan Li
Liwei Wang
Masayoshi Tomizuka
ODL
229
1,136
0
09 Nov 2018
Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality
Taiji Suzuki
182
246
0
18 Oct 2018
Gradient Descent Provably Optimizes Over-parameterized Neural Networks
S. Du
Xiyu Zhai
Barnabás Póczós
Aarti Singh
MLT
ODL
233
1,276
0
04 Oct 2018
Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data
Yuanzhi Li
Yingyu Liang
MLT
219
653
0
03 Aug 2018
Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot
Franck Gabriel
Clément Hongler
273
3,223
0
20 Jun 2018
Optimal approximation of continuous functions by very deep ReLU networks
Dmitry Yarotsky
196
294
0
10 Feb 2018