arXiv:1710.10174
SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data
27 October 2017
Alon Brutzkus, Amir Globerson, Eran Malach, Shai Shalev-Shwartz
MLT
Papers citing "SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data" (11 of 61 papers shown)
Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity
Chulhee Yun, S. Sra, Ali Jadbabaie (17 Oct 2018)
A Priori Estimates of the Population Risk for Two-layer Neural Networks
Weinan E, Chao Ma, Lei Wu (15 Oct 2018)
Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel
Colin Wei, J. Lee, Qiang Liu, Tengyu Ma (12 Oct 2018)
A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks
Sanjeev Arora, Nadav Cohen, Noah Golowich, Wei Hu (04 Oct 2018)
Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization
G. Wang, G. Giannakis, Jie Chen (14 Aug 2018) [MLT]
Generalization Error in Deep Learning
Daniel Jakubovitz, Raja Giryes, M. Rodrigues (03 Aug 2018) [AI4CE]
ResNet with one-neuron hidden layers is a Universal Approximator
Hongzhou Lin, Stefanie Jegelka (28 Jun 2018)
When Will Gradient Methods Converge to Max-margin Classifier under ReLU Models?
Tengyu Xu, Yi Zhou, Kaiyi Ji, Yingbin Liang (12 Jun 2018)
Data augmentation instead of explicit regularization
Alex Hernández-García, Peter König (11 Jun 2018)
Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks
Peter L. Bartlett, D. Helmbold, Philip M. Long (16 Feb 2018)
Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro (27 Feb 2015)