How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong
arXiv:2201.08514, 21 January 2022. [SSL, MLT]
Papers citing "How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis" (49 papers):
- Retraining with Predicted Hard Labels Provably Increases Model Accuracy. Rudrajit Das, Inderjit S. Dhillon, Alessandro Epasto, Adel Javanmard, Jieming Mao, Vahab Mirrokni, Sujay Sanghavi, Peilin Zhong. 17 Jun 2024.
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks. Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong. 12 Oct 2021. [UQCV, MLT]
- Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data. Colin Wei, Kendrick Shen, Yining Chen, Tengyu Ma. 07 Oct 2020. [SSL]
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case. Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong. 25 Jun 2020. [MLT, AI4CE]
- Statistical and Algorithmic Insights for Semi-supervised Learning with Self-training. Samet Oymak, Talha Cihad Gulcu. 19 Jun 2020.
- Self-training Avoids Using Spurious Features Under Domain Shift. Yining Chen, Colin Wei, Ananya Kumar, Tengyu Ma. 17 Jun 2020. [OOD]
- Rethinking Pre-training and Self-training. Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, E. D. Cubuk, Quoc V. Le. 11 Jun 2020. [SSeg]
- Understanding and Mitigating the Tradeoff Between Robustness and Accuracy. Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, Percy Liang. 25 Feb 2020. [AAML]
- FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, E. D. Cubuk, Alexey Kurakin, Han Zhang, Colin Raffel. 21 Jan 2020. [AAML]
- Self-training with Noisy Student improves ImageNet classification. Qizhe Xie, Minh-Thang Luong, Eduard H. Hovy, Quoc V. Le. 11 Nov 2019. [NoLa]
- When Does Self-supervision Improve Few-shot Learning? Jong-Chyi Su, Subhransu Maji, B. Hariharan. 08 Oct 2019.
- Revisiting Self-Training for Neural Sequence Generation. Junxian He, Jiatao Gu, Jiajun Shen, Marc'Aurelio Ranzato. 30 Sep 2019. [SSL, LRM]
- Self-Training for End-to-End Speech Recognition. Jacob Kahn, Ann Lee, Awni Y. Hannun. 19 Sep 2019. [SSL]
- Deep Self-Learning From Noisy Labels. Jiangfan Han, Ping Luo, Xiaogang Wang. 06 Aug 2019. [NoLa]
- Unlabeled Data Improves Adversarial Robustness. Y. Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, John C. Duchi. 31 May 2019.
- MixMatch: A Holistic Approach to Semi-Supervised Learning. David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel. 06 May 2019.
- Billion-scale semi-supervised learning for image classification. I. Z. Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, D. Mahajan. 02 May 2019. [SSL]
- Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild. Kibok Lee, Kimin Lee, Jinwoo Shin, Honglak Lee. 29 Mar 2019. [CLL]
- Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks. Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang. 24 Jan 2019. [MLT]
- Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers. Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang. 12 Nov 2018. [MLT]
- Learning Two Layer Rectified Neural Networks in Polynomial Time. Ainesh Bakshi, Rajesh Jayaram, David P. Woodruff. 05 Nov 2018. [NoLa]
- Gradient Descent Provably Optimizes Over-parameterized Neural Networks. S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh. 04 Oct 2018. [MLT, ODL]
- Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization. G. Wang, G. Giannakis, Jie Chen. 14 Aug 2018. [MLT]
- Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data. Yuanzhi Li, Yingyu Liang. 03 Aug 2018. [MLT]
- Learning One-hidden-layer ReLU Networks via Gradient Descent. Xiao Zhang, Yaodong Yu, Lingxiao Wang, Quanquan Gu. 20 Jun 2018. [MLT]
- Neural Tangent Kernel: Convergence and Generalization in Neural Networks. Arthur Jacot, Franck Gabriel, Clément Hongler. 20 Jun 2018.
- End-to-end Learning of a Convolutional Neural Network via Deep Tensor Decomposition. Samet Oymak, Mahdi Soltanolkotabi. 16 May 2018.
- Guaranteed Recovery of One-Hidden-Layer Neural Networks via Cross Entropy. H. Fu, Yuejie Chi, Yingbin Liang. 18 Feb 2018. [FedML]
- Spurious Local Minima are Common in Two-Layer ReLU Neural Networks. Itay Safran, Ohad Shamir. 24 Dec 2017.
- Learning One-hidden-layer Neural Networks with Landscape Design. Rong Ge, Jason D. Lee, Tengyu Ma. 01 Nov 2017. [MLT]
- Deep Neural Networks as Gaussian Processes. Jaehoon Lee, Yasaman Bahri, Roman Novak, S. Schoenholz, Jeffrey Pennington, Jascha Narain Sohl-Dickstein. 01 Nov 2017. [UQCV, BDL]
- SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data. Alon Brutzkus, Amir Globerson, Eran Malach, Shai Shalev-Shwartz. 27 Oct 2017. [MLT]
- Theoretical insights into the optimization landscape of over-parameterized shallow neural networks. Mahdi Soltanolkotabi, Adel Javanmard, Jason D. Lee. 16 Jul 2017.
- Recovery Guarantees for One-hidden-layer Neural Networks. Kai Zhong, Zhao Song, Prateek Jain, Peter L. Bartlett, Inderjit S. Dhillon. 10 Jun 2017. [MLT]
- Convergence Analysis of Two-layer Neural Networks with ReLU Activation. Yuanzhi Li, Yang Yuan. 28 May 2017. [MLT]
- Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning. Takeru Miyato, S. Maeda, Masanori Koyama, S. Ishii. 13 Apr 2017. [GAN]
- Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs. Alon Brutzkus, Amir Globerson. 26 Feb 2017. [MLT]
- Temporal Ensembling for Semi-Supervised Learning. S. Laine, Timo Aila. 07 Oct 2016. [UQCV]
- Domain Separation Networks. Konstantinos Bousmalis, George Trigeorgis, N. Silberman, Dilip Krishnan, D. Erhan. 22 Aug 2016. [OOD]
- Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning. Mehdi S. M. Sajjadi, Mehran Javanmardi, Tolga Tasdizen. 14 Jun 2016. [BDL]
- ℓ1-regularized Neural Networks are Improperly Learnable in Polynomial Time. Yuchen Zhang, Jason D. Lee, Michael I. Jordan. 13 Oct 2015.
- Domain-Adversarial Training of Neural Networks. Yaroslav Ganin, E. Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, M. Marchand, Victor Lempitsky. 28 May 2015. [GAN, OOD]
- Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Sergey Ioffe, Christian Szegedy. 11 Feb 2015. [OOD]
- Learning Transferable Features with Deep Adaptation Networks. Mingsheng Long, Yue Cao, Jianmin Wang, Michael I. Jordan. 10 Feb 2015. [OOD]
- Tensor Factorization via Matrix Factorization. Volodymyr Kuleshov, Arun Tejasvi Chaganty, Percy Liang. 29 Jan 2015.
- Training Deep Neural Networks on Noisy Labels with Bootstrapping. Scott E. Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, D. Erhan, Andrew Rabinovich. 20 Dec 2014. [NoLa]
- Learning with Pseudo-Ensembles. Philip Bachman, O. Alsharif, Doina Precup. 16 Dec 2014.
- Deep Domain Confusion: Maximizing for Domain Invariance. Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, Trevor Darrell. 10 Dec 2014. [OOD]
- Unsupervised Domain Adaptation by Backpropagation. Yaroslav Ganin, Victor Lempitsky. 26 Sep 2014. [OOD]