Understanding deep learning requires rethinking generalization

10 November 2016
Chiyuan Zhang
Samy Bengio
Moritz Hardt
Benjamin Recht
Oriol Vinyals
    HAI

Papers citing "Understanding deep learning requires rethinking generalization"

50 / 1,027 papers shown
Posterior Meta-Replay for Continual Learning
Christian Henning
Maria R. Cervera
Francesco D'Angelo
J. Oswald
Regina Traber
Benjamin Ehret
Seijin Kobayashi
Benjamin Grewe
João Sacramento
CLL
BDL
51
55
0
01 Mar 2021
DST: Data Selection and joint Training for Learning with Noisy Labels
Yi Wei
Xue Mei
Xin Liu
Pengxiang Xu
NoLa
27
3
0
01 Mar 2021
Neuron Coverage-Guided Domain Generalization
Chris Xing Tian
Haoliang Li
Xiaofei Xie
Yang Liu
Shiqi Wang
23
35
0
27 Feb 2021
FINE Samples for Learning with Noisy Labels
Taehyeon Kim
Jongwoo Ko
Sangwook Cho
J. Choi
Se-Young Yun
NoLa
38
103
0
23 Feb 2021
SVRG Meets AdaGrad: Painless Variance Reduction
Benjamin Dubois-Taine
Sharan Vaswani
Reza Babanezhad
Mark W. Schmidt
Simon Lacoste-Julien
18
18
0
18 Feb 2021
Dissecting Supervised Contrastive Learning
Florian Graf
Christoph Hofer
Marc Niethammer
Roland Kwitt
SSL
117
70
0
17 Feb 2021
Evaluating Multi-label Classifiers with Noisy Labels
Wenting Zhao
Carla P. Gomes
NoLa
74
14
0
16 Feb 2021
Learning curves of generic features maps for realistic datasets with a teacher-student model
Bruno Loureiro
Cédric Gerbelot
Hugo Cui
Sebastian Goldt
Florent Krzakala
M. Mézard
Lenka Zdeborová
35
136
0
16 Feb 2021
Training Larger Networks for Deep Reinforcement Learning
Keita Ota
Devesh K. Jha
Asako Kanezaki
OffRL
25
39
0
16 Feb 2021
Low Curvature Activations Reduce Overfitting in Adversarial Training
Vasu Singla
Sahil Singla
David Jacobs
S. Feizi
AAML
37
45
0
15 Feb 2021
Learning by Turning: Neural Architecture Aware Optimisation
Yang Liu
Jeremy Bernstein
M. Meister
Yisong Yue
ODL
43
26
0
14 Feb 2021
Technical Challenges for Training Fair Neural Networks
Valeriia Cherepanova
V. Nanda
Micah Goldblum
John P. Dickerson
Tom Goldstein
FaML
25
22
0
12 Feb 2021
Understanding the Interaction of Adversarial Training with Noisy Labels
Jianing Zhu
Jingfeng Zhang
Bo Han
Tongliang Liu
Gang Niu
Hongxia Yang
Mohan Kankanhalli
Masashi Sugiyama
AAML
27
27
0
06 Feb 2021
On the Reproducibility of Neural Network Predictions
Srinadh Bhojanapalli
Kimberly Wilber
Andreas Veit
A. S. Rawat
Seungyeon Kim
A. Menon
Sanjiv Kumar
29
35
0
05 Feb 2021
Provably End-to-end Label-Noise Learning without Anchor Points
Xuefeng Li
Tongliang Liu
Bo Han
Gang Niu
Masashi Sugiyama
NoLa
133
121
0
04 Feb 2021
SGD Generalizes Better Than GD (And Regularization Doesn't Help)
I Zaghloul Amir
Tomer Koren
Roi Livni
29
46
0
01 Feb 2021
A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks
Asaf Noy
Yi Tian Xu
Y. Aflalo
Lihi Zelnik-Manor
Rong Jin
41
3
0
12 Jan 2021
Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning
Milad Nasr
Shuang Song
Abhradeep Thakurta
Nicolas Papernot
Nicholas Carlini
MIACV
FedML
82
216
0
11 Jan 2021
Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise
Spencer Frei
Yuan Cao
Quanquan Gu
FedML
MLT
70
19
0
04 Jan 2021
A Second-Order Approach to Learning with Instance-Dependent Label Noise
Zhaowei Zhu
Tongliang Liu
Yang Liu
NoLa
22
126
0
22 Dec 2020
Neural Joint Entropy Estimation
Yuval Shalev
Amichai Painsky
I. Ben-Gal
34
8
0
21 Dec 2020
Multi-Label Noise Robust Collaborative Learning for Remote Sensing Image Classification
A. Aksoy
Mahdyar Ravanbakhsh
Begüm Demir
35
24
0
19 Dec 2020
Attentional-Biased Stochastic Gradient Descent
Q. Qi
Yi Tian Xu
Rong Jin
W. Yin
Tianbao Yang
ODL
31
12
0
13 Dec 2020
Beyond Class-Conditional Assumption: A Primary Attempt to Combat Instance-Dependent Label Noise
Pengfei Chen
Junjie Ye
Guangyong Chen
Jingwei Zhao
Pheng-Ann Heng
NoLa
40
123
0
10 Dec 2020
A Topological Filter for Learning with Label Noise
Pengxiang Wu
Songzhu Zheng
Mayank Goswami
Dimitris N. Metaxas
Chao Chen
NoLa
30
112
0
09 Dec 2020
Multi-Objective Interpolation Training for Robustness to Label Noise
Diego Ortego
Eric Arazo
Paul Albert
Noel E. O'Connor
Kevin McGuinness
NoLa
30
112
0
08 Dec 2020
Robustness of Accuracy Metric and its Inspirations in Learning with Noisy Labels
Pengfei Chen
Junjie Ye
Guangyong Chen
Jingwei Zhao
Pheng-Ann Heng
NoLa
103
34
0
08 Dec 2020
DiffPrune: Neural Network Pruning with Deterministic Approximate Binary Gates and $L_0$ Regularization
Yaniv Shulman
46
3
0
07 Dec 2020
Privacy and Robustness in Federated Learning: Attacks and Defenses
Lingjuan Lyu
Han Yu
Xingjun Ma
Chen Chen
Lichao Sun
Jun Zhao
Qiang Yang
Philip S. Yu
FedML
183
355
0
07 Dec 2020
Cross-Layer Distillation with Semantic Calibration
Defang Chen
Jian-Ping Mei
Yuan Zhang
Can Wang
Yan Feng
Chun-Yen Chen
FedML
45
287
0
06 Dec 2020
Every Model Learned by Gradient Descent Is Approximately a Kernel Machine
Pedro M. Domingos
MLT
29
71
0
30 Nov 2020
Overcoming Measurement Inconsistency in Deep Learning for Linear Inverse Problems: Applications in Medical Imaging
Marija Vella
João F. C. Mota
19
4
0
29 Nov 2020
Rethinking Generalization in American Sign Language Prediction for Edge Devices with Extremely Low Memory Footprint
A. Paul
P. Mohan
Stuti Sehgal
14
17
0
27 Nov 2020
Gradient Starvation: A Learning Proclivity in Neural Networks
Mohammad Pezeshki
Sekouba Kaba
Yoshua Bengio
Aaron Courville
Doina Precup
Guillaume Lajoie
MLT
50
258
0
18 Nov 2020
Dynamic Hard Pruning of Neural Networks at the Edge of the Internet
Lorenzo Valerio
F. M. Nardini
A. Passarella
R. Perego
25
12
0
17 Nov 2020
Chaos and Complexity from Quantum Neural Network: A study with Diffusion Metric in Machine Learning
S. Choudhury
Ankan Dutta
Debisree Ray
22
21
0
16 Nov 2020
Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting
Zeke Xie
Fengxiang He
Shaopeng Fu
Issei Sato
Dacheng Tao
Masashi Sugiyama
21
60
0
12 Nov 2020
A Survey of Label-noise Representation Learning: Past, Present and Future
Bo Han
Quanming Yao
Tongliang Liu
Gang Niu
Ivor W. Tsang
James T. Kwok
Masashi Sugiyama
NoLa
24
159
0
09 Nov 2020
Teaching with Commentaries
Aniruddh Raghu
M. Raghu
Simon Kornblith
David Duvenaud
Geoffrey E. Hinton
27
24
0
05 Nov 2020
Generalized Negative Correlation Learning for Deep Ensembling
Sebastian Buschjäger
Lukas Pfahler
K. Morik
FedML
BDL
UQCV
11
17
0
05 Nov 2020
Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate
Jingfeng Wu
Difan Zou
Vladimir Braverman
Quanquan Gu
8
18
0
04 Nov 2020
Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
Ben Adlam
Jeffrey Pennington
UD
39
93
0
04 Nov 2020
Latent Causal Invariant Model
Xinwei Sun
Botong Wu
Xiangyu Zheng
Chang-Shu Liu
Wei Chen
Tao Qin
Tie-Yan Liu
OOD
CML
BDL
29
14
0
04 Nov 2020
Understanding Capacity-Driven Scale-Out Neural Recommendation Inference
Michael Lui
Yavuz Yetim
Özgür Özkan
Zhuoran Zhao
Shin-Yeh Tsai
Carole-Jean Wu
Mark Hempstead
GNN
BDL
LRM
22
51
0
04 Nov 2020
Instance based Generalization in Reinforcement Learning
Martín Bertrán
Natalia Martínez
Mariano Phielipp
Guillermo Sapiro
OffRL
27
16
0
02 Nov 2020
A Bayesian Perspective on Training Speed and Model Selection
Clare Lyle
Lisa Schut
Binxin Ru
Y. Gal
Mark van der Wilk
44
24
0
27 Oct 2020
Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models
J. Rocks
Pankaj Mehta
23
41
0
26 Oct 2020
Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
Jinyuan Jia
Binghui Wang
Neil Zhenqiang Gong
AAML
35
5
0
26 Oct 2020
An Investigation of how Label Smoothing Affects Generalization
Blair Chen
Liu Ziyin
Zihao Wang
Paul Pu Liang
UQCV
21
17
0
23 Oct 2020
Deep Learning is Singular, and That's Good
Daniel Murfet
Susan Wei
Biwei Huang
Hui Li
Jesse Gell-Redman
T. Quella
UQCV
24
26
0
22 Oct 2020