Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise

4 January 2021
Spencer Frei, Yuan Cao, Quanquan Gu
FedML · MLT

Papers citing "Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise"

19 / 19 papers shown

  1. Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel
     Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, Surya Ganguli
     28 Oct 2020 · 190 citations

  2. Noise in Classification
     Maria-Florina Balcan, Nika Haghtalab
     10 Oct 2020 · 11 citations

  3. Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK
     Yuanzhi Li, Tengyu Ma, Hongyang R. Zhang
     09 Jul 2020 · 27 citations · MLT

  4. Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks
     Ilias Diakonikolas, D. Kane, Vasilis Kontonis, Nikos Zarifis
     22 Jun 2020 · 65 citations

  5. The Pitfalls of Simplicity Bias in Neural Networks
     Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli
     13 Jun 2020 · 359 citations · AAML

  6. Non-Convex SGD Learns Halfspaces with Adversarial Label Noise
     Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
     11 Jun 2020 · 28 citations

  7. Directional convergence and alignment in deep learning
     Ziwei Ji, Matus Telgarsky
     11 Jun 2020 · 171 citations

  8. Learning Halfspaces with Massart Noise Under Structured Distributions
     Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
     13 Feb 2020 · 61 citations

  9. How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?
     Zixiang Chen, Yuan Cao, Difan Zou, Quanquan Gu
     27 Nov 2019 · 122 citations

  10. Gradient Descent Maximizes the Margin of Homogeneous Neural Networks
      Kaifeng Lyu, Jian Li
      13 Jun 2019 · 335 citations

  11. Kernel and Rich Regimes in Overparametrized Models
      Blake E. Woodworth, Suriya Gunasekar, Pedro H. P. Savarese, E. Moroshko, Itay Golan, Jason D. Lee, Daniel Soudry, Nathan Srebro
      13 Jun 2019 · 363 citations

  12. Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks
      Yuan Cao, Quanquan Gu
      30 May 2019 · 389 citations · MLT, AI4CE

  13. What Can ResNet Learn Efficiently, Going Beyond Kernels?
      Zeyuan Allen-Zhu, Yuanzhi Li
      24 May 2019 · 183 citations

  14. Generalization Error Bounds of Gradient Descent for Learning Over-parameterized Deep ReLU Networks
      Yuan Cao, Quanquan Gu
      04 Feb 2019 · 156 citations · ODL, MLT, AI4CE

  15. Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks
      Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu
      21 Nov 2018 · 448 citations · ODL

  16. Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel
      Colin Wei, Jason D. Lee, Qiang Liu, Tengyu Ma
      12 Oct 2018 · 244 citations

  17. Gradient Descent Provably Optimizes Over-parameterized Neural Networks
      S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh
      04 Oct 2018 · 1,272 citations · MLT, ODL

  18. Towards Deep Learning Models Resistant to Adversarial Attacks
      Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
      19 Jun 2017 · 12,063 citations · SILM, OOD

  19. Understanding deep learning requires rethinking generalization
      Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
      10 Nov 2016 · 4,626 citations · HAI