Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak · 27 March 2019 · arXiv:1903.11680 · NoLa
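The title states the paper's claim: an overparameterized network trained by gradient descent on noisily labeled data fits the correctly labeled examples first and only memorizes the mislabeled ones in later iterations, so stopping training early yields a classifier robust to the noise. Below is a minimal illustrative sketch of the practical recipe this suggests, stopping when held-out loss stops improving. It is not the authors' code; the model, data loaders, learning rate, and patience threshold are placeholder assumptions.

```python
# Minimal early-stopping sketch for training under label noise.
# Assumes a generic PyTorch classifier and DataLoaders; all
# hyperparameters are illustrative, not the paper's setup.
import copy
import torch
import torch.nn as nn

def train_with_early_stopping(model, train_loader, val_loader,
                              lr=0.1, max_epochs=100, patience=5):
    """Stop once validation loss (a proxy for the onset of noise
    memorization) fails to improve for `patience` epochs."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    best_loss, best_state, bad_epochs = float("inf"), None, 0

    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:   # y may contain noisy labels
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x), y).item()
                           for x, y in val_loader) / len(val_loader)

        if val_loss < best_loss:
            best_loss, bad_epochs = val_loss, 0
            best_state = copy.deepcopy(model.state_dict())
        else:
            bad_epochs += 1
            if bad_epochs >= patience:   # noise memorization setting in
                break

    model.load_state_dict(best_state)  # roll back to the best checkpoint
    return model
```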

Papers citing "Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"

Showing 50 of 176 citing papers:
Combining Self-labeling with Selective Sampling
Jedrzej Kozal, Michal Woźniak · 11 Jan 2023

PADDLES: Phase-Amplitude Spectrum Disentangled Early Stopping for Learning with Noisy Labels
Huaxi Huang, Hui-Sung Kang, Sheng Liu, Olivier Salvado, Thierry Rakotoarivelo, Dadong Wang, Tongliang Liu · 07 Dec 2022 · NoLa

Margin-based sampling in high dimensions: When being active is less efficient than staying passive
A. Tifrea, Jacob Clarysse, Fanny Yang · 01 Dec 2022

Noisy Pairing and Partial Supervision for Opinion Summarization
Hayate Iso, Xiaolan Wang, Yoshihiko Suhara · 16 Nov 2022 · AI4TS

Characterizing the Spectrum of the NTK via a Power Series Expansion
Michael Murray, Hui Jin, Benjamin Bowman, Guido Montúfar · 15 Nov 2022

Instance-Dependent Generalization Bounds via Optimal Transport
Songyan Hou, Parnian Kassraie, Anastasis Kratsios, Andreas Krause, Jonas Rothfuss · 02 Nov 2022

Deep Learning is Provably Robust to Symmetric Label Noise
Carey E. Priebe, Ningyuan Huang, Soledad Villar, Cong Mu, Li-Wei Chen · 26 Oct 2022 · NoLa
Automatic Data Augmentation via Invariance-Constrained Learning
Ignacio Hounie, Luiz F. O. Chamon, Alejandro Ribeiro · 29 Sep 2022

Metadata Archaeology: Unearthing Data Subsets by Leveraging Training Dynamics
Shoaib Ahmed Siddiqui, Nitarshan Rajkumar, Tegan Maharaj, David M. Krueger, Sara Hooker · 20 Sep 2022

Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks
Yunwen Lei, Rong Jin, Yiming Ying · 19 Sep 2022 · MLT

Intersection of Parallels as an Early Stopping Criterion
Ali Vardasbi, Maarten de Rijke, Mostafa Dehghani · 19 Aug 2022 · MoMe

Labeling Chaos to Learning Harmony: Federated Learning with Noisy Labels
Vasileios Tsouvalas, Aaqib Saeed, T. Ozcelebi, N. Meratnia · 19 Aug 2022 · FedML

CTRL: Clustering Training Losses for Label Error Detection
C. Yue, N. Jha · 17 Aug 2022 · NoLa

On the Activation Function Dependence of the Spectral Bias of Neural Networks
Q. Hong, Jonathan W. Siegel, Qinyan Tan, Jinchao Xu · 09 Aug 2022

MarkerMap: nonlinear marker selection for single-cell studies
Nabeel Sarwar, Wilson Gregory, George A. Kevrekidis, Soledad Villar, Bianca Dumitrascu · 28 Jul 2022
Identifying Hard Noise in Long-Tailed Sample Distribution
Xuanyu Yi, Kaihua Tang, Xiansheng Hua, J. Lim, Hanwang Zhang · 27 Jul 2022

ProMix: Combating Label Noise via Maximizing Clean Sample Utility
Rui Xiao, Yiwen Dong, Haobo Wang, Lei Feng, Runze Wu, Gang Chen, J. Zhao · 21 Jul 2022

Uncertainty-Aware Learning Against Label Noise on Imbalanced Datasets
Yingsong Huang, Bing Bai, Shengwei Zhao, Kun Bai, Fei-Yue Wang · 12 Jul 2022 · NoLa

Fairness via In-Processing in the Over-parameterized Regime: A Cautionary Tale
A. Veldanda, Ivan Brugere, Jiahao Chen, Sanghamitra Dutta, Alan Mishler, S. Garg · 29 Jun 2022

Sparse Double Descent: Where Network Pruning Aggravates Overfitting
Zhengqi He, Zeke Xie, Quanzhi Zhu, Zengchang Qin · 17 Jun 2022

Spectral Bias Outside the Training Set for Deep Networks in the Kernel Regime
Benjamin Bowman, Guido Montúfar · 06 Jun 2022

Robust Fine-Tuning of Deep Neural Networks with Hessian-based Generalization Guarantees
Haotian Ju, Dongyue Li, Hongyang R. Zhang · 06 Jun 2022

Robust Meta-learning with Sampling Noise and Label Noise via Eigen-Reptile
Dong Chen, Lingfei Wu, Siliang Tang, Xiao Yun, Bo Long, Yueting Zhuang · 04 Jun 2022 · VLM, NoLa
Towards a Defense Against Federated Backdoor Attacks Under Continuous Training
Shuai Wang, J. Hayase, Giulia Fanti, Sewoong Oh · 24 May 2022 · FedML

AdaCap: Adaptive Capacity control for Feed-Forward Neural Networks
Katia Méziani, Karim Lounici, Benjamin Riu · 09 May 2022

On Learning Contrastive Representations for Learning with Noisy Labels
Linya Yi, Sheng Liu, Qi She, A. McLeod, Boyu Wang · 03 Mar 2022 · NoLa

Robust Training under Label Noise by Over-parameterization
Sheng Liu, Zhihui Zhu, Qing Qu, Chong You · 28 Feb 2022 · NoLa, OOD

Explicit Regularization via Regularizer Mirror Descent
Navid Azizan, Sahin Lale, B. Hassibi · 22 Feb 2022

On Optimal Early Stopping: Over-informative versus Under-informative Parametrization
Ruoqi Shen, Liyao (Mars) Gao, Yi Ma · 20 Feb 2022

Random Feature Amplification: Feature Learning and Generalization in Neural Networks
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett · 15 Feb 2022 · MLT

Towards Data-Algorithm Dependent Generalization: a Case Study on Overparameterized Linear Regression
Jing Xu, Jiaye Teng, Yang Yuan, Andrew Chi-Chih Yao · 12 Feb 2022

Maximum Likelihood Uncertainty Estimation: Robustness to Outliers
Deebul Nair, Nico Hochgeschwender, Miguel A. Olivares-Mendez · 03 Feb 2022 · OOD
Do We Need to Penalize Variance of Losses for Learning with Label Noise?
Yexiong Lin, Yu Yao, Yuxuan Du, Jun Yu, Bo Han, Biwei Huang, Tongliang Liu · 30 Jan 2022 · NoLa

A Stochastic Bundle Method for Interpolating Networks
Alasdair Paren, Leonard Berrada, Rudra P. K. Poudel, M. P. Kumar · 29 Jan 2022

Overview frequency principle/spectral bias in deep learning
Z. Xu, Yaoyu Zhang, Tao Luo · 19 Jan 2022 · FaML

In Defense of the Unitary Scalarization for Deep Multi-Task Learning
Vitaly Kurin, Alessandro De Palma, Ilya Kostrikov, Shimon Whiteson, M. P. Kumar · 11 Jan 2022

Provable Hierarchical Lifelong Learning with a Sketch-based Modular Architecture
Zihao Deng, Zee Fryer, Brendan Juba, Rina Panigrahy, Xin Wang · 21 Dec 2021

Rethinking Influence Functions of Neural Networks in the Over-parameterized Regime
Rui Zhang, Shihua Zhang · 15 Dec 2021 · TDI

Robust Neural Network Classification via Double Regularization
Olof Zetterqvist, Rebecka Jörnsten, J. Jonasson · 15 Dec 2021

On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
Fangshuo Liao, Anastasios Kyrillidis · 05 Dec 2021

Learning with Noisy Labels by Efficient Transition Matrix Estimation to Combat Label Miscorrection
Seong Min Kye, Kwanghee Choi, Joonyoung Yi, Buru Chang · 29 Nov 2021 · NoLa
Constrained Instance and Class Reweighting for Robust Learning under Label Noise
Abhishek Kumar, Ehsan Amid · 09 Nov 2021 · NoLa

Subquadratic Overparameterization for Shallow Neural Networks
Chaehwan Song, Ali Ramezani-Kebrya, Thomas Pethick, Armin Eftekhari, V. Cevher · 02 Nov 2021

Mitigating Memorization of Noisy Labels via Regularization between Representations
Hao Cheng, Zhaowei Zhu, Xing Sun, Yang Liu · 18 Oct 2021 · NoLa

Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher
Mehdi Rezagholizadeh, A. Jafari, Puneeth Salad, Pranav Sharma, Ali Saheb Pasand, A. Ghodsi · 16 Oct 2021

Detecting Corrupted Labels Without Training a Model to Predict
Zhaowei Zhu, Zihao Dong, Yang Liu · 12 Oct 2021 · NoLa

Robustness and Reliability When Training With Noisy Labels
Amanda Olmin, Fredrik Lindsten · 07 Oct 2021 · OOD, NoLa

Consistency Regularization Can Improve Robustness to Label Noise
Erik Englesson, Hossein Azizpour · 04 Oct 2021 · NoLa

When are Deep Networks really better than Decision Forests at small sample sizes, and how?
Haoyin Xu, K. A. Kinfu, Will LeVine, Sambit Panda, Jayanta Dey, ..., M. Kusmanov, F. Engert, Christopher M. White, Joshua T. Vogelstein, Carey E. Priebe · 31 Aug 2021

Heavy-tailed Streaming Statistical Estimation
Che-Ping Tsai, Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Ravikumar · 25 Aug 2021