arXiv:1602.04485
Benefits of depth in neural networks
14 February 2016
Matus Telgarsky
Papers citing "Benefits of depth in neural networks" (50 of 353 shown)
Polyhedral Complex Extraction from ReLU Networks using Edge Subdivision
Arturs Berzins
25
5
0
12 Jun 2023
Representational Strengths and Limitations of Transformers
Clayton Sanford
Daniel J. Hsu
Matus Telgarsky
22
81
0
05 Jun 2023
On the Expressive Power of Neural Networks
J. Holstermann
17
3
0
31 May 2023
Probabilistic computation and uncertainty quantification with emerging covariance
He Ma
Yong Qi
Li Zhang
Wenlian Lu
Jianfeng Feng
11
1
0
30 May 2023
Minimum Width of Leaky-ReLU Neural Networks for Uniform Universal Approximation
Liang Li
Yifei Duan
Guanghua Ji
Yongqiang Cai
MLT
32
13
0
29 May 2023
Data Topology-Dependent Upper Bounds of Neural Network Widths
Sangmin Lee
Jong Chul Ye
26
0
0
25 May 2023
VanillaNet: the Power of Minimalism in Deep Learning
Hanting Chen
Yunhe Wang
Jianyuan Guo
Dacheng Tao
VLM
34
85
0
22 May 2023
Minimax optimal density estimation using a shallow generative model with a one-dimensional latent variable
Hyeok Kyu Kwon
Minwoo Chae
DRL
23
3
0
11 May 2023
Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks
Eshaan Nichani
Alexandru Damian
Jason D. Lee
MLT
38
13
0
11 May 2023
Approximation of Nonlinear Functionals Using Deep ReLU Networks
Linhao Song
Jun Fan
Dirong Chen
Ding-Xuan Zhou
15
14
0
10 Apr 2023
Depth Separation with Multilayer Mean-Field Networks
Y. Ren
Mo Zhou
Rong Ge
OOD
14
3
0
03 Apr 2023
Multi-task neural networks by learned contextual inputs
Anders T. Sandnes
B. Grimstad
O. Kolbjørnsen
14
1
0
01 Mar 2023
Are More Layers Beneficial to Graph Transformers?
Haiteng Zhao
Shuming Ma
Dongdong Zhang
Zhi-Hong Deng
Furu Wei
27
12
0
01 Mar 2023
Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes
Christian Haase
Christoph Hertrich
Georg Loho
31
21
0
24 Feb 2023
Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data
Jonathan W. Siegel
6
2
0
02 Feb 2023
On the Lipschitz Constant of Deep Networks and Double Descent
Matteo Gamba
Hossein Azizpour
Marten Bjorkman
25
7
0
28 Jan 2023
Deep Convolutional Framelet Denoising for Panoramic by Mixed Wavelet Integration
Masoud Mohammadi
Seyed Javad Mahdavi Chabok
MedIm
14
0
0
25 Jan 2023
Expected Gradients of Maxout Networks and Consequences to Parameter Initialization
Hanna Tseran
Guido Montúfar
ODL
22
0
0
17 Jan 2023
Limitations on approximation by deep and shallow neural networks
G. Petrova
P. Wojtaszczyk
11
7
0
30 Nov 2022
A Kernel Perspective of Skip Connections in Convolutional Networks
Daniel Barzilai
Amnon Geifman
Meirav Galun
Ronen Basri
17
11
0
27 Nov 2022
Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev and Besov Spaces
Jonathan W. Siegel
20
28
0
25 Nov 2022
LU decomposition and Toeplitz decomposition of a neural network
Yucong Liu
Simiao Jiao
Lek-Heng Lim
30
7
0
25 Nov 2022
Leveraging Heteroscedastic Uncertainty in Learning Complex Spectral Mapping for Single-channel Speech Enhancement
Kuan-Lin Chen
Daniel D. E. Wong
Ke Tan
Buye Xu
Anurag Kumar
V. Ithapu
19
1
0
16 Nov 2022
Universal Time-Uniform Trajectory Approximation for Random Dynamical Systems with Recurrent Neural Networks
A. Bishop
37
1
0
15 Nov 2022
Exponentially Improving the Complexity of Simulating the Weisfeiler-Lehman Test with Graph Neural Networks
Anders Aamand
Justin Y. Chen
Piotr Indyk
Shyam Narayanan
R. Rubinfeld
Nicholas Schiefer
Sandeep Silwal
Tal Wagner
39
21
0
06 Nov 2022
When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work
Jiawei Zhang
Yushun Zhang
Mingyi Hong
Ruoyu Sun
Z. Luo
26
10
0
21 Oct 2022
Transformers Learn Shortcuts to Automata
Bingbin Liu
Jordan T. Ash
Surbhi Goel
A. Krishnamurthy
Cyril Zhang
OffRL
LRM
40
155
0
19 Oct 2022
Improved Bounds on Neural Complexity for Representing Piecewise Linear Functions
Kuan-Lin Chen
H. Garudadri
Bhaskar D. Rao
11
18
0
13 Oct 2022
On Scrambling Phenomena for Randomly Initialized Recurrent Networks
Vaggos Chatziafratis
Ioannis Panageas
Clayton Sanford
S. Stavroulakis
11
2
0
11 Oct 2022
Factor Augmented Sparse Throughput Deep ReLU Neural Networks for High Dimensional Regression
Jianqing Fan
Yihong Gu
14
21
0
05 Oct 2022
Enumeration of max-pooling responses with generalized permutohedra
Laura Escobar
Patricio Gallardo
Javier González-Anaya
J. L. González
Guido Montúfar
A. Morales
14
1
0
29 Sep 2022
Achieve the Minimum Width of Neural Networks for Universal Approximation
Yongqiang Cai
9
18
0
23 Sep 2022
Optimal bump functions for shallow ReLU networks: Weight decay, depth separation and the curse of dimensionality
Stephan Wojtowytsch
22
1
0
02 Sep 2022
Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution
Jianhao Ma
S. Fattahi
42
5
0
15 Jul 2022
Concentration inequalities and optimal number of layers for stochastic deep neural networks
Michele Caprio
Sayan Mukherjee
BDL
17
1
0
22 Jun 2022
Deep Partial Least Squares for Empirical Asset Pricing
M. Dixon
Nicholas G. Polson
Kemen Goicoechea
26
2
0
20 Jun 2022
Coin Flipping Neural Networks
Yuval Sieradzki
Nitzan Hodos
Gal Yehuda
Assaf Schuster
UQCV
27
3
0
18 Jun 2022
Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias
Navid Ardeshir
Daniel J. Hsu
Clayton Sanford
CML
AI4CE
18
6
0
10 Jun 2022
A general approximation lower bound in $L^p$ norm, with applications to feed-forward neural networks
E. M. Achour
Armand Foucault
Sébastien Gerchinovitz
François Malgouyres
29
7
0
09 Jun 2022
Exponential Separations in Symmetric Neural Networks
Aaron Zweig
Joan Bruna
27
7
0
02 Jun 2022
Asymptotic Properties for Bayesian Neural Network in Besov Space
Kyeongwon Lee
Jaeyong Lee
BDL
11
4
0
01 Jun 2022
Universality of Group Convolutional Neural Networks Based on Ridgelet Analysis on Groups
Sho Sonoda
Isao Ishikawa
Masahiro Ikeda
30
9
0
30 May 2022
Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power
Binghui Li
Jikai Jin
Han Zhong
J. Hopcroft
Liwei Wang
OOD
79
27
0
27 May 2022
Embedding Principle in Depth for the Loss Landscape Analysis of Deep Neural Networks
Zhiwei Bai
Tao Luo
Z. Xu
Yaoyu Zhang
23
4
0
26 May 2022
CNNs Avoid Curse of Dimensionality by Learning on Patches
Vamshi C. Madala
S. Chandrasekaran
Jason Bunk
UQCV
27
5
0
22 May 2022
On the inability of Gaussian process regression to optimally learn compositional functions
M. Giordano
Kolyan Ray
Johannes Schmidt-Hieber
33
12
0
16 May 2022
ExSpliNet: An interpretable and expressive spline-based neural network
Daniele Fakhoury
Emanuele Fakhoury
H. Speleers
11
33
0
03 May 2022
Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete
Daniel Bertschinger
Christoph Hertrich
Paul Jungeblut
Tillmann Miltzow
Simon Weber
OffRL
57
30
0
04 Apr 2022
How do noise tails impact on deep ReLU networks?
Jianqing Fan
Yihong Gu
Wen-Xin Zhou
ODL
38
13
0
20 Mar 2022
Towards understanding deep learning with the natural clustering prior
Simon Carbonnelle
13
0
0
15 Mar 2022