Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem
arXiv:2110.10295
19 October 2021
Clayton Sanford
Vaggos Chatziafratis
Papers citing "Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem" (20 papers shown)
1. On the Approximation Power of Two-Layer Networks of Random ReLUs
   Daniel J. Hsu, Clayton Sanford, Rocco A. Servedio, Emmanouil-Vasileios Vlatakis-Gkaragkounis
   03 Feb 2021 · 25 citations

2. Depth-Width Trade-offs for Neural Networks via Topological Entropy
   Kaifeng Bu, Yaobo Zhang, Qingxian Luo
   15 Oct 2020 · 8 citations

3. Sharp Representation Theorems for ReLU Networks with Precise Dependence on Depth
   Guy Bresler, Dheeraj M. Nagaraj
   07 Jun 2020 · 21 citations

4. Language Models are Few-Shot Learners [BDL]
   Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
   28 May 2020 · 41,894 citations

5. Better Depth-Width Trade-offs for Neural Networks through the lens of Dynamical Systems
   Vaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas
   02 Mar 2020 · 15 citations

6. Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem
   Vaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas, Tianlin Li
   09 Dec 2019 · 21 citations

7. Deep ReLU Networks Have Surprisingly Few Activation Patterns
   Boris Hanin, David Rolnick
   03 Jun 2019 · 226 citations

8. On the Expressive Power of Deep Polynomial Neural Networks
   Joe Kileel, Matthew Trager, Joan Bruna
   29 May 2019 · 82 citations

9. Depth Separations in Neural Networks: What is Actually Being Separated? [MDE]
   Itay Safran, Ronen Eldan, Ohad Shamir
   15 Apr 2019 · 36 citations

10. Is Deeper Better only when Shallow is Good?
    Eran Malach, Shai Shalev-Shwartz
    08 Mar 2019 · 45 citations

11. Depth Separation for Neural Networks [MDE]
    Amit Daniely
    27 Feb 2017 · 74 citations

12. On the ability of neural nets to express distributions [BDL]
    Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski, Sanjeev Arora
    22 Feb 2017 · 84 citations

13. Understanding Deep Neural Networks with Rectified Linear Units [PINN]
    Raman Arora, Amitabh Basu, Poorya Mianjy, Anirbit Mukherjee
    04 Nov 2016 · 641 citations

14. Exponential expressivity in deep neural networks through transient chaos
    Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, Surya Ganguli
    16 Jun 2016 · 591 citations

15. On the Expressive Power of Deep Neural Networks
    Maithra Raghu, Ben Poole, Jon M. Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein
    16 Jun 2016 · 788 citations

16. Benefits of depth in neural networks
    Matus Telgarsky
    14 Feb 2016 · 608 citations

17. The Power of Depth for Feedforward Neural Networks
    Ronen Eldan, Ohad Shamir
    12 Dec 2015 · 732 citations

18. Deep Residual Learning for Image Recognition [MedIm]
    Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
    10 Dec 2015 · 193,814 citations

19. Representation Benefits of Deep Feedforward Networks
    Matus Telgarsky
    27 Sep 2015 · 242 citations

20. On the Number of Linear Regions of Deep Neural Networks
    Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio
    08 Feb 2014 · 1,254 citations