Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks

arXiv:1903.11680 · 27 March 2019
Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak
Topics: NoLa
ArXiv · PDF · HTML
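The paper's result, in brief: when a wide network is trained by gradient descent on partially mislabeled data, it fits the correctly labeled examples first and only later memorizes the noise, so stopping training early recovers a model close to one trained on clean labels. The sketch below is illustrative only, not the paper's experimental setup: it trains a small two-layer ReLU network by full-batch gradient descent on synthetic data with 30% flipped labels, and uses a clean holdout set as a practical stopping rule. All hyperparameters, sizes, and helper names here are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
w_true = rng.normal(size=d)  # hypothetical ground-truth direction

def make_data(n):
    """Synthetic binary labels from a random linear rule (illustrative only)."""
    X = rng.normal(size=(n, d))
    y = (X @ w_true > 0).astype(float)
    return X, y

X_train, y_train = make_data(200)
X_val, y_val = make_data(200)            # assumed clean holdout, used only to stop

flip = rng.random(200) < 0.3             # corrupt 30% of the training labels
y_noisy = np.where(flip, 1.0 - y_train, y_train)

# One-hidden-layer ReLU network, wide enough to interpolate the noisy labels.
h = 512
W1 = rng.normal(size=(d, h)) / np.sqrt(d)
w2 = rng.normal(size=h) / np.sqrt(h)

def forward(X):
    Z = np.maximum(X @ W1, 0.0)          # hidden ReLU activations
    return Z, Z @ w2                     # activations and scalar outputs

def loss(pred, y):
    t = 2.0 * y - 1.0                    # +/-1 targets, squared loss
    return np.mean((pred - t) ** 2)

lr, patience = 0.05, 100
best_val, steps_since_best = np.inf, 0
for step in range(10000):
    # Full-batch gradient descent on the noisy training labels.
    Z, pred = forward(X_train)
    err = 2.0 * (pred - (2.0 * y_noisy - 1.0)) / len(y_noisy)   # dL/dpred
    w2_grad = Z.T @ err
    W1_grad = X_train.T @ ((err[:, None] * w2) * (Z > 0.0))     # (Z > 0) = ReLU'
    W1 -= lr * W1_grad
    w2 -= lr * w2_grad

    # Early stopping: halt once the clean validation loss stops improving.
    val_loss = loss(forward(X_val)[1], y_val)
    if val_loss < best_val - 1e-4:
        best_val, steps_since_best = val_loss, 0
    else:
        steps_since_best += 1
    if steps_since_best >= patience:
        print(f"early stop at step {step}; clean val loss {best_val:.3f}")
        break
```

Note that the paper's analysis ties the stopping time to properties of the data (noise level, cluster separation) rather than to a holdout set; the clean validation set above is just the most common practical stand-in for that theoretical stopping time.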

Papers citing "Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"

Showing 26 of 176 citing papers.

• Heteroskedastic and Imbalanced Deep Learning with Adaptive Regularization
  Kaidi Cao, Yining Chen, Junwei Lu, Nikos Arechiga, Adrien Gaidon, Tengyu Ma · 29 Jun 2020
• Subpopulation Data Poisoning Attacks
  Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea · AAML, SILM · 24 Jun 2020
• Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent
  Mehdi Abbana Bennani, Thang Doan, Masashi Sugiyama · CLL · 21 Jun 2020
• Exploring Weight Importance and Hessian Bias in Model Pruning
  Mingchen Li, Yahya Sattar, Christos Thrampoulidis, Samet Oymak · 19 Jun 2020
• When Does Preconditioning Help or Hurt Generalization?
  S. Amari, Jimmy Ba, Roger C. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu · 18 Jun 2020
• Part-dependent Label Noise: Towards Instance-dependent Label Noise
  Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Biwei Huang, Haifeng Liu, Gang Niu, Dacheng Tao, Masashi Sugiyama · NoLa · 14 Jun 2020
• Generalization by Recognizing Confusion
  Daniel Chiu, Franklyn Wang, S. Kominers · NoLa · 13 Jun 2020
• Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation
  Reinhard Heckel, Mahdi Soltanolkotabi · 07 May 2020
• Depth-2 Neural Networks Under a Data-Poisoning Attack
  Sayar Karmakar, Anirbit Mukherjee, Ramchandran Muthukumar · 04 May 2020
• LOCA: LOcal Conformal Autoencoder for standardized data coordinates
  Erez Peterfreund, Ofir Lindenbaum, Felix Dietrich, Tom S. Bertalan, M. Gavish, Ioannis G. Kevrekidis, Ronald R. Coifman · 15 Apr 2020
• Self-Adaptive Training: beyond Empirical Risk Minimization
  Lang Huang, Chaoning Zhang, Hongyang R. Zhang · NoLa · 24 Feb 2020
• On the Role of Dataset Quality and Heterogeneity in Model Confidence
  Yuan Zhao, Jiasi Chen, Samet Oymak · 23 Feb 2020
• Learning Not to Learn in the Presence of Noisy Labels
  Liu Ziyin, Blair Chen, Ru Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda · NoLa · 16 Feb 2020
• Identifying Mislabeled Data using the Area Under the Margin Ranking
  Geoff Pleiss, Tianyi Zhang, Ethan R. Elenberg, Kilian Q. Weinberger · NoLa · 28 Jan 2020
• How does Early Stopping Help Generalization against Label Noise?
  Hwanjun Song, Minseok Kim, Dongmin Park, Jae-Gil Lee · NoLa · 19 Nov 2019
• Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators
  Reinhard Heckel, Mahdi Soltanolkotabi · DiffM · 31 Oct 2019
• Image recognition from raw labels collected without annotators
  Fatih Yilmaz, Reinhard Heckel · NoLa · 20 Oct 2019
• Distillation ≈ Early Stopping? Harvesting Dark Knowledge Utilizing Anisotropic Information Retrieval For Overparameterized Neural Network
  Bin Dong, Jikai Hou, Yiping Lu, Zhihua Zhang · 02 Oct 2019
• Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian
  Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi · MLT · 12 Jun 2019
• Stable Rank Normalization for Improved Generalization in Neural Networks and GANs
  Amartya Sanyal, Philip H. S. Torr, P. Dokania · 11 Jun 2019
• The Convergence Rate of Neural Networks for Learned Functions of Different Frequencies
  Ronen Basri, David Jacobs, Yoni Kasten, S. Kritchman · 02 Jun 2019
• Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee
  Wei Hu, Zhiyuan Li, Dingli Yu · NoLa · 27 May 2019
• On Learning Over-parameterized Neural Networks: A Functional Approximation Perspective
  Lili Su, Pengkun Yang · MLT · 26 May 2019
• Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?
  Samet Oymak, Mahdi Soltanolkotabi · ODL · 25 Dec 2018
• Stochastic Gradient Descent Learns State Equations with Nonlinear Activations
  Samet Oymak · 09 Sep 2018
• On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
  N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016