ResearchTrend.AI

Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee

27 May 2019. Wei Hu, Zhiyuan Li, Dingli Yu. arXiv:1905.11368

Papers citing "Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee"

13 citing papers:
  • Scaling Limits of Wide Neural Networks with Weight Sharing: Gaussian Process Behavior, Gradient Independence, and Neural Tangent Kernel Derivation. Greg Yang (13 Feb 2019)
  • Generalization in Deep Networks: The Role of Distance from Initialization. Vaishnavh Nagarajan, J. Zico Kolter (07 Jan 2019)
  • Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks. Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu (21 Nov 2018)
  • Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers. Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang (12 Nov 2018)
  • Gradient Descent Provably Optimizes Over-parameterized Neural Networks. S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh (04 Oct 2018)
  • Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data. Yuanzhi Li, Yingyu Liang (03 Aug 2018)
  • Learning to Reweight Examples for Robust Deep Learning. Mengye Ren, Wenyuan Zeng, Binh Yang, R. Urtasun (24 Mar 2018)
  • MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels. Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li Li, Li Fei-Fei (14 Dec 2017)
  • Early stopping for kernel boosting algorithms: A general analysis with localized complexities. Yuting Wei, Fanny Yang, Martin J. Wainwright (05 Jul 2017)
  • Learning From Noisy Large-Scale Datasets With Minimal Supervision. Andreas Veit, N. Alldrin, Gal Chechik, Ivan Krasin, Abhinav Gupta, Serge J. Belongie (06 Jan 2017)
  • Understanding deep learning requires rethinking generalization. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals (10 Nov 2016)
  • Training Convolutional Networks with Noisy Labels. Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir D. Bourdev, Rob Fergus (09 Jun 2014)
  • A tail inequality for quadratic forms of subgaussian random vectors. Daniel J. Hsu, Sham Kakade, Tong Zhang (13 Oct 2011)