When Will Gradient Methods Converge to Max-margin Classifier under ReLU Models?
arXiv: 1806.04339 · 12 June 2018
Tengyu Xu, Yi Zhou, Kaiyi Ji, Yingbin Liang
Papers citing "When Will Gradient Methods Converge to Max-margin Classifier under ReLU Models?" (7 of 7 papers shown)
The Implicit Bias for Adaptive Optimization Algorithms on Homogeneous Neural Networks
Bohan Wang, Qi Meng, Wei Chen, Tie-Yan Liu
11 Dec 2020

Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss
Lénaïc Chizat, Francis R. Bach
MLT · 11 Feb 2020

Sampling Bias in Deep Active Classification: An Empirical Study
Ameya Prabhu, Charles Dognin, M. Singh
20 Sep 2019

Gradient Descent Maximizes the Margin of Homogeneous Neural Networks
Kaifeng Lyu, Jian Li
13 Jun 2019

Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models
Mor Shpigel Nacson, Suriya Gunasekar, J. Lee, Nathan Srebro, Daniel Soudry
17 May 2019

Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization
G. Wang, G. Giannakis, Jie Chen
MLT · 14 Aug 2018

Stochastic Gradient Descent on Separable Data: Exact Convergence with a Fixed Learning Rate
Mor Shpigel Nacson, Nathan Srebro, Daniel Soudry
FedML · MLT · 05 Jun 2018