On the Power and Limitations of Random Features for Understanding Neural Networks
Gilad Yehudai, Ohad Shamir
arXiv: 1904.00687, v4 (latest), 1 April 2019
Community: MLT
Papers citing "On the Power and Limitations of Random Features for Understanding Neural Networks" (41 of 91 papers shown):
- Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning. Zeyuan Allen-Zhu, Yuanzhi Li (FedML). 17 Dec 2020
- Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods. Taiji Suzuki, Shunta Akiyama (MLT). 06 Dec 2020
- Deep Learning is Singular, and That's Good. Daniel Murfet, Susan Wei, Biwei Huang, Hui Li, Jesse Gell-Redman, T. Quella (UQCV). 22 Oct 2020
- Beyond Lazy Training for Over-parameterized Tensor Decomposition. Xiang Wang, Chenwei Wu, Jason D. Lee, Tengyu Ma, Rong Ge. 22 Oct 2020
- How Powerful are Shallow Neural Networks with Bandlimited Random Weights? Ming Li, Sho Sonoda, Feilong Cao, Yu Wang, Jiye Liang. 19 Aug 2020
- When Hardness of Approximation Meets Hardness of Learning. Eran Malach, Shai Shalev-Shwartz. 18 Aug 2020
- Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK. Yuanzhi Li, Tengyu Ma, Hongyang R. Zhang (MLT). 09 Jul 2020
- Beyond Signal Propagation: Is Feature Diversity Necessary in Deep Neural Network Initialization? Yaniv Blumenfeld, D. Gilboa, Daniel Soudry (ODL). 02 Jul 2020
- Statistical-Query Lower Bounds via Functional Gradients. Surbhi Goel, Aravind Gollakota, Adam R. Klivans. 29 Jun 2020
- Towards Understanding Hierarchical Learning: Benefits of Neural Representations. Minshuo Chen, Yu Bai, Jason D. Lee, T. Zhao, Huan Wang, Caiming Xiong, R. Socher (SSL). 24 Jun 2020
- When Do Neural Networks Outperform Kernel Methods? Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari. 24 Jun 2020
- Network size and weights size for memorization with two-layers neural networks. Sébastien Bubeck, Ronen Eldan, Y. Lee, Dan Mikulincer. 04 Jun 2020
- Approximation Schemes for ReLU Regression. Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam R. Klivans, Mahdi Soltanolkotabi. 26 May 2020
- Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks. Z. Fan, Zhichao Wang. 25 May 2020
- Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond. Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens (BDL). 23 Apr 2020
- Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity. Pritish Kamath, Omar Montasser, Nathan Srebro. 09 Mar 2020
- Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs. Jonathan Frankle, D. Schwab, Ari S. Morcos. 29 Feb 2020
- Uncertainty Quantification for Sparse Deep Learning. Yuexi Wang, Veronika Rockova (BDL, UQCV). 26 Feb 2020
- An Optimization and Generalization Analysis for Max-Pooling Networks. Alon Brutzkus, Amir Globerson (MLT, AI4CE). 22 Feb 2020
- Learning Parities with Neural Networks. Amit Daniely, Eran Malach. 18 Feb 2020
- A closer look at the approximation capabilities of neural networks. Kai Fong Ernest Chong. 16 Feb 2020
- Taylorized Training: Towards Better Approximation of Neural Network Training at Finite Width. Yu Bai, Ben Krause, Huan Wang, Caiming Xiong, R. Socher. 10 Feb 2020
- Proving the Lottery Ticket Hypothesis: Pruning is All You Need. Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir. 03 Feb 2020
- Learning a Single Neuron with Gradient Methods. Gilad Yehudai, Ohad Shamir (MLT). 15 Jan 2020
- How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks? Zixiang Chen, Yuan Cao, Difan Zou, Quanquan Gu. 27 Nov 2019
- Nearly Minimal Over-Parametrization of Shallow Neural Networks. Armin Eftekhari, Chaehwan Song, Volkan Cevher. 09 Oct 2019
- Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks. Yu Bai, Jason D. Lee. 03 Oct 2019
- Dynamics of Deep Neural Networks and Neural Tangent Hierarchy. Jiaoyang Huang, H. Yau. 18 Sep 2019
- Limitations of Lazy Training of Two-layers Neural Networks. Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari (MLT). 21 Jun 2019
- Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian. Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi (MLT). 12 Jun 2019
- Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks. Yuan Cao, Quanquan Gu (MLT, AI4CE). 30 May 2019
- On the Inductive Bias of Neural Tangent Kernels. A. Bietti, Julien Mairal. 29 May 2019
- Temporal-difference learning with nonlinear function approximation: lazy training and mean field regimes. Andrea Agazzi, Jianfeng Lu. 27 May 2019
- On Learning Over-parameterized Neural Networks: A Functional Approximation Perspective. Lili Su, Pengkun Yang (MLT). 26 May 2019
- Linearized two-layers neural networks in high dimension. Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari (MLT). 27 Apr 2019
- Stabilize Deep ResNet with A Sharp Scaling Factor τ. Huishuai Zhang, Da Yu, Mingyang Yi, Wei Chen, Tie-Yan Liu. 17 Mar 2019
- A Theoretical Analysis of Deep Q-Learning. Jianqing Fan, Zhuoran Yang, Yuchen Xie, Zhaoran Wang. 01 Jan 2019
- On Lazy Training in Differentiable Programming. Lénaïc Chizat, Edouard Oyallon, Francis R. Bach. 19 Dec 2018
- Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel. Colin Wei, Jason D. Lee, Qiang Liu, Tengyu Ma. 12 Oct 2018
- Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization. G. Wang, G. Giannakis, Jie Chen (MLT). 14 Aug 2018
- Spurious Valleys in Two-layer Neural Network Optimization Landscapes. Luca Venturi, Afonso S. Bandeira, Joan Bruna. 18 Feb 2018