Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks
Itay Safran, Ohad Shamir
arXiv:1610.09887, 31 October 2016
Papers citing "Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks" (42 papers shown):
1. On the optimal approximation of Sobolev and Besov functions using deep ReLU neural networks (Yunfei Yang; 02 Sep 2024)
2. Spectral complexity of deep neural networks (Simmaco Di Lillo, Domenico Marinucci, Michele Salvi, S. Vigogna; 15 May 2024) [BDL]
3. From Alexnet to Transformers: Measuring the Non-linearity of Deep Neural Networks with Affine Optimal Transport (Quentin Bouniot, I. Redko, Anton Mallasto, Charlotte Laclau, Karol Arndt, Oliver Struckmeier, Markus Heinonen, Ville Kyrki, Samuel Kaski; 17 Oct 2023)
4. Data Topology-Dependent Upper Bounds of Neural Network Widths (Sangmin Lee, Jong Chul Ye; 25 May 2023)
5. Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks (Eshaan Nichani, Alexandru Damian, Jason D. Lee; 11 May 2023) [MLT]
6. Deep neural network approximation of composite functions without the curse of dimensionality (Adrian Riekert; 12 Apr 2023)
7. Error convergence and engineering-guided hyperparameter search of PINNs: towards optimized I-FENN performance (Panos Pantidis, Habiba Eldababy, Christopher Miguel Tagle, M. Mobasher; 03 Mar 2023)
8. Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes (Christian Haase, Christoph Hertrich, Georg Loho; 24 Feb 2023)
9. Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data (Jonathan W. Siegel; 02 Feb 2023)
10. Is Out-of-Distribution Detection Learnable? (Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, Feng Liu; 26 Oct 2022) [OODD]
11. When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work (Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Zhi-Quan Luo; 21 Oct 2022)
12. On the Privacy Risks of Cell-Based NAS Architectures (Haiping Huang, Zhikun Zhang, Yun Shen, Michael Backes, Qi Li, Yang Zhang; 04 Sep 2022)
13. Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias (Navid Ardeshir, Daniel J. Hsu, Clayton Sanford; 10 Jun 2022) [CML, AI4CE]
14. Exponential Separations in Symmetric Neural Networks (Aaron Zweig, Joan Bruna; 02 Jun 2022)
15. Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete (Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber; 04 Apr 2022) [OffRL]
16. Interplay between depth of neural networks and locality of target functions (Takashi Mori, Masakuni Ueda; 28 Jan 2022)
17. Depth and Feature Learning are Provably Beneficial for Neural Network Discriminators (Carles Domingo-Enrich; 27 Dec 2021) [MLT, MDE]
18. Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks (Hanxun Huang, Yisen Wang, S. Erfani, Quanquan Gu, James Bailey, Xingjun Ma; 07 Oct 2021) [AAML, TPM]
19. Meta Internal Learning (Raphael Bensadoun, Shir Gur, Tomer Galanti, Lior Wolf; 06 Oct 2021) [GAN]
20. Theory of Deep Convolutional Neural Networks III: Approximating Radial Functions (Tong Mao, Zhongjie Shi, Ding-Xuan Zhou; 02 Jul 2021)
21. Layer Folding: Neural Network Depth Reduction using Activation Linearization (Amir Ben Dror, Niv Zehngut, Avraham Raviv, E. Artyomov, Ran Vitek, R. Jevnisek; 17 Jun 2021)
22. Quantitative approximation results for complex-valued neural networks (A. Caragea, D. Lee, J. Maly, G. Pfander, F. Voigtlaender; 25 Feb 2021)
23. The Connection Between Approximation, Depth Separation and Learnability in Neural Networks (Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir; 31 Jan 2021)
24. Size and Depth Separation in Approximating Benign Functions with Neural Networks (Gal Vardi, Daniel Reichman, T. Pitassi, Ohad Shamir; 30 Jan 2021)
25. A case where a spindly two-layer linear network whips any neural network with a fully connected input layer (Manfred K. Warmuth, W. Kotłowski, Ehsan Amid; 16 Oct 2020) [MLT]
26. Phase Transitions in Rate Distortion Theory and Deep Learning (Philipp Grohs, Andreas Klotz, F. Voigtlaender; 03 Aug 2020)
27. Expressivity of Deep Neural Networks (Ingo Gühring, Mones Raslan, Gitta Kutyniok; 09 Jul 2020)
28. Is deeper better? It depends on locality of relevant features (Takashi Mori, Masahito Ueda; 26 May 2020) [OOD]
29. Optimal Function Approximation with Relu Neural Networks (Bo Liu, Yi Liang; 09 Sep 2019)
30. Deep Network Approximation Characterized by Number of Neurons (Zuowei Shen, Haizhao Yang, Shijun Zhang; 13 Jun 2019)
31. Is Deeper Better only when Shallow is Good? (Eran Malach, Shai Shalev-Shwartz; 08 Mar 2019)
32. Nonlinear Approximation via Compositions (Zuowei Shen, Haizhao Yang, Shijun Zhang; 26 Feb 2019)
33. Error bounds for approximations with deep ReLU neural networks in $W^{s,p}$ norms (Ingo Gühring, Gitta Kutyniok, P. Petersen; 21 Feb 2019)
34. The Oracle of DLphi (Dominik Alfke, W. Baines, J. Blechschmidt, Mauricio J. del Razo Sarmina, Amnon Drory, ..., L. Thesing, Philipp Trunschke, Johannes von Lindheim, David Weber, Melanie Weber; 17 Jan 2019)
35. Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity (Chulhee Yun, S. Sra, Ali Jadbabaie; 17 Oct 2018)
36. Understanding Weight Normalized Deep Neural Networks with Rectified Linear Units (Yixi Xu, Tianlin Li; 03 Oct 2018) [MQ]
37. Learning Restricted Boltzmann Machines via Influence Maximization (Guy Bresler, Frederic Koehler, Ankur Moitra, Elchanan Mossel; 25 May 2018) [AI4CE]
38. Limits on representing Boolean functions by linear combinations of simple functions: thresholds, ReLUs, and low-degree polynomials (Richard Ryan Williams; 26 Feb 2018)
39. Optimal approximation of continuous functions by very deep ReLU networks (Dmitry Yarotsky; 10 Feb 2018)
40. Mixing Complexity and its Applications to Neural Networks (Michal Moshkovitz, Naftali Tishby; 02 Mar 2017)
41. Understanding Deep Neural Networks with Rectified Linear Units (R. Arora, A. Basu, Poorya Mianjy, Anirbit Mukherjee; 04 Nov 2016) [PINN]
42. Benefits of depth in neural networks (Matus Telgarsky; 14 Feb 2016)