Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems
Atsushi Nitanda, Geoffrey Chinot, Taiji Suzuki
arXiv:1905.09870 · 23 May 2019
Papers citing "Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems" (8 papers)
Sharper Guarantees for Learning Neural Network Classifiers with Gradient Methods
Hossein Taheri, Christos Thrampoulidis, Arya Mazumdar
13 Oct 2024
Fine-grained analysis of non-parametric estimation for pairwise learning
Junyu Zhou, Shuo Huang, Han Feng, Puyu Wang, Ding-Xuan Zhou
31 May 2023
Generalization Guarantees of Gradient Descent for Multi-Layer Neural Networks
Puyu Wang, Yunwen Lei, Di Wang, Yiming Ying, Ding-Xuan Zhou
26 May 2023
When Expressivity Meets Trainability: Fewer than n Neurons Can Work
Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Z. Luo
21 Oct 2022
Informed Learning by Wide Neural Networks: Convergence, Generalization and Sampling Complexity
Jianyi Yang, Shaolei Ren
02 Jul 2022
Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis
Alexander Munteanu, Simon Omlor, Zhao Song, David P. Woodruff
26 Jun 2022
A Scaling Law for Synthetic-to-Real Transfer: How Much Is Your Pre-training Effective?
Hiroaki Mikami, Kenji Fukumizu, Shogo Murai, Shuji Suzuki, Yuta Kikuchi, Taiji Suzuki, S. Maeda, Kohei Hayashi
25 Aug 2021
The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training
Andrea Montanari, Yiqiao Zhong
25 Jul 2020