Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks
Peter L. Bartlett, Nick Harvey, Christopher Liaw, Abbas Mehrabian
arXiv:1703.02930 · 8 March 2017
Papers citing "Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks" (25 of 125 shown):
| Title | Authors | Topics | Counts | Date |
| --- | --- | --- | --- | --- |
| Memory capacity of neural networks with threshold and ReLU activations | Roman Vershynin | | 38 / 21 / 0 | 20 Jan 2020 |
| Deep Gamblers: Learning to Abstain with Portfolio Theory | Liu Ziyin, Zhikang T. Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda | | 43 / 110 / 0 | 29 Jun 2019 |
| The phase diagram of approximation rates for deep neural networks | Dmitry Yarotsky, Anton Zhevnerchuk | | 35 / 121 / 0 | 22 Jun 2019 |
| Explicitizing an Implicit Bias of the Frequency Principle in Two-layer Neural Networks | Yaoyu Zhang, Zhi-Qin John Xu, Zheng Ma | MLT, AI4CE | 61 / 38 / 0 | 24 May 2019 |
| How degenerate is the parametrization of neural networks with the ReLU activation function? | Julius Berner, Dennis Elbrächter, Philipp Grohs | ODL | 45 / 28 / 0 | 23 May 2019 |
| A lattice-based approach to the expressivity of deep ReLU neural networks | V. Corlay, J. Boutros, P. Ciblat, L. Brunel | | 37 / 4 / 0 | 28 Feb 2019 |
| Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks | Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang | MLT | 63 / 961 / 0 | 24 Jan 2019 |
| Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks | Zhi-Qin John Xu, Yaoyu Zhang, Yan Xiao, Zheng Ma | | 63 / 505 / 0 | 19 Jan 2019 |
| The capacity of feedforward neural networks | Pierre Baldi, Roman Vershynin | | 25 / 67 / 0 | 02 Jan 2019 |
| Multitask Learning Deep Neural Networks to Combine Revealed and Stated Preference Data | Shenhao Wang, Qingyi Wang, Jinhuan Zhao | AI4TS | 25 / 21 / 0 | 02 Jan 2019 |
| A Theoretical Analysis of Deep Q-Learning | Jianqing Fan, Zhuoran Yang, Yuchen Xie, Zhaoran Wang | | 48 / 598 / 0 | 01 Jan 2019 |
| On the potential for open-endedness in neural networks | N. Guttenberg, N. Virgo, A. Penn | | 28 / 10 / 0 | 12 Dec 2018 |
| Deep Active Learning with a Neural Architecture Search | Yonatan Geifman, Ran El-Yaniv | AI4CE | 22 / 44 / 0 | 19 Nov 2018 |
| Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity | Chulhee Yun, S. Sra, Ali Jadbabaie | | 39 / 117 / 0 | 17 Oct 2018 |
| Learning Compressed Transforms with Low Displacement Rank | Anna T. Thomas, Albert Gu, Tri Dao, Atri Rudra, Christopher Ré | | 32 / 40 / 0 | 04 Oct 2018 |
| Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black-Scholes Partial Differential Equations | Julius Berner, Philipp Grohs, Arnulf Jentzen | | 29 / 182 / 0 | 09 Sep 2018 |
| Deep learning generalizes because the parameter-function map is biased towards simple functions | Guillermo Valle Pérez, Chico Q. Camargo, A. Louis | MLT, AI4CE | 23 / 228 / 0 | 22 May 2018 |
| How Many Samples are Needed to Estimate a Convolutional or Recurrent Neural Network? | S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Aarti Singh | SSL | 39 / 57 / 0 | 21 May 2018 |
| On the Power of Over-parametrization in Neural Networks with Quadratic Activation | S. Du, Jason D. Lee | | 62 / 268 / 0 | 03 Mar 2018 |
| Functional Gradient Boosting based on Residual Network Perception | Atsushi Nitanda, Taiji Suzuki | | 25 / 27 / 0 | 25 Feb 2018 |
| DeepMatch: Balancing Deep Covariate Representations for Causal Inference Using Adversarial Training | Nathan Kallus | CML, OOD | 33 / 76 / 0 | 15 Feb 2018 |
| Optimal approximation of continuous functions by very deep ReLU networks | Dmitry Yarotsky | | 32 / 294 / 0 | 10 Feb 2018 |
| Implicit Regularization in Deep Learning | Behnam Neyshabur | | 17 / 145 / 0 | 06 Sep 2017 |
| Exploring Generalization in Deep Learning | Behnam Neyshabur, Srinadh Bhojanapalli, David A. McAllester, Nathan Srebro | FAtt | 110 / 1,241 / 0 | 27 Jun 2017 |
| Benefits of depth in neural networks | Matus Telgarsky | | 195 / 606 / 0 | 14 Feb 2016 |