Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data

Colin Wei, Kendrick Shen, Yining Chen, Tengyu Ma · 7 October 2020 [SSL]
arXiv: 2010.03622
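For orientation, since the listing below carries no detail on the paper itself: the self-training analyzed here is the standard pseudo-labeling loop, in which a model fit on labeled data repeatedly labels an unlabeled pool and retrains on its own confident predictions. Below is a minimal Python sketch of that generic loop; the fit/predict_proba interface and all names are illustrative assumptions, not code from the paper or from ResearchTrend.AI.

```python
import numpy as np

def self_train(model, X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
    """Generic pseudo-labeling self-training loop (illustrative sketch).

    `model` is assumed to expose fit(X, y) and predict_proba(X), as
    scikit-learn classifiers do; this interface is an assumption, not
    the paper's own implementation.
    """
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        model.fit(X, y)                  # retrain on labeled + pseudo-labeled data
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold  # accept only confident labels
        if not confident.any():
            break                        # nothing confident left to absorb
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, proba[confident].argmax(axis=1)])
        pool = pool[~confident]          # shrink the unlabeled pool
    return model
```

With scikit-learn, for instance, self_train(LogisticRegression(), X_lab, y_lab, X_unlab) would satisfy this interface.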

Papers citing "Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data"

Showing 50 of 57 citing papers:
• References Indeed Matter? Reference-Free Preference Optimization for Conversational Query Reformulation · Doyoung Kim, Youngjun Lee, Joeun Kim, Jihwan Bang, Hwanjun Song, Susik Yoon, Jae-Gil Lee · 10 May 2025
• Weakly Supervised Contrastive Adversarial Training for Learning Robust Features from Semi-supervised Data · Lilin Zhang, Chengpei Wu, Ning Yang · 14 Mar 2025
• Weak-to-Strong Generalization Through the Data-Centric Lens · Changho Shin, John Cooper, Frederic Sala · 05 Dec 2024
• Infinite Width Limits of Self Supervised Neural Networks · Maximilian Fleissner, Gautham Govind Anil, D. Ghoshdastidar · 17 Nov 2024 [SSL]
• High-dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws · M. E. Ildiz, Halil Alperen Gozeten, Ege Onur Taga, Marco Mondelli, Samet Oymak · 24 Oct 2024
• Towards Understanding Why FixMatch Generalizes Better Than Supervised Learning · Jingyang Li, Jiachun Pan, Vincent Y. F. Tan, Kim-Chuan Toh, Pan Zhou · 15 Oct 2024 [AAML, MLT]
• Investigating the Impact of Model Complexity in Large Language Models · Jing Luo, Huiyuan Wang, Weiran Huang · 01 Oct 2024
• Retraining with Predicted Hard Labels Provably Increases Model Accuracy · Rudrajit Das, Inderjit S. Dhillon, Alessandro Epasto, Adel Javanmard, Jieming Mao, Vahab Mirrokni, Sujay Sanghavi, Peilin Zhong · 17 Jun 2024
• Theoretical Analysis of Weak-to-Strong Generalization · Hunter Lang, David Sontag, Aravindan Vijayaraghavan · 25 May 2024
• Uncertainty in Graph Neural Networks: A Survey · Fangxin Wang, Yuqing Liu, Kay Liu, Yibo Wang, Sourav Medya, Philip S. Yu · 11 Mar 2024 [AI4CE]
• Can semi-supervised learning use all the data effectively? A lower bound perspective · Alexandru Țifrea, Gizem Yüce, Amartya Sanyal, Fanny Yang · 30 Nov 2023
• Cross-Domain HAR: Few Shot Transfer Learning for Human Activity Recognition · Megha Thukral, H. Haresamudram, Thomas Ploetz · 22 Oct 2023
• Deep Insights into Noisy Pseudo Labeling on Graph Data · Botao Wang, Jia Li, Yang Liu, Jiashun Cheng, Yu Rong, Wenjia Wang, Fugee Tsung · 02 Oct 2023 [NoLa]
• Collaborative Learning via Prediction Consensus · Dongyang Fan, Celestine Mendler-Dünner, Martin Jaggi · 29 May 2023 [FedML]
• Density Ratio Estimation-based Bayesian Optimization with Semi-Supervised Learning · Jungtaek Kim · 24 May 2023
• Universal Domain Adaptation from Foundation Models: A Baseline Study · Bin Deng, K. Jia · 18 May 2023 [VLM]
• DAC-MR: Data Augmentation Consistency Based Meta-Regularization for Meta-Learning · Jun Shu, Xiang Yuan, Deyu Meng, Zongben Xu · 13 May 2023
• On the Stepwise Nature of Self-Supervised Learning · James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht · 27 Mar 2023 [SSL]
• Learning with Explanation Constraints · Rattana Pukdee, Dylan Sam, J. Zico Kolter, Maria-Florina Balcan, Pradeep Ravikumar · 25 Mar 2023 [FAtt]
• DuNST: Dual Noisy Self Training for Semi-Supervised Controllable Text Generation · Yuxi Feng, Xiaoyuan Yi, Xiting Wang, L. Lakshmanan, Xing Xie · 16 Dec 2022 [DiffM]
• Augmentation Invariant Manifold Learning · Shulei Wang · 01 Nov 2022
• Multiple Instance Learning via Iterative Self-Paced Supervised Contrastive Learning · Kangning Liu, Weicheng Zhu, Yiqiu Shen, Sheng Liu, N. Razavian, Krzysztof J. Geras, C. Fernandez-Granda · 17 Oct 2022 [SSL]
• Label Propagation with Weak Supervision · Rattana Pukdee, Dylan Sam, Maria-Florina Balcan, Pradeep Ravikumar · 07 Oct 2022
• What shapes the loss landscape of self-supervised learning? · Liu Ziyin, Ekdeep Singh Lubana, Masakuni Ueda, Hidenori Tanaka · 02 Oct 2022
• Joint Embedding Self-Supervised Learning in the Kernel Regime · B. Kiani, Randall Balestriero, Yubei Chen, S. Lloyd, Yann LeCun · 29 Sep 2022 [SSL]
• MaxMatch: Semi-Supervised Learning with Worst-Case Consistency · Yangbangyan Jiang, Xiaodan Li, YueFeng Chen, Yuan He, Qianqian Xu, Zhiyong Yang, Xiaochun Cao, Qingming Huang · 26 Sep 2022
• Analyzing Data-Centric Properties for Graph Contrastive Learning · Puja Trivedi, Ekdeep Singh Lubana, Mark Heimann, Danai Koutra, Jayaraman J. Thiagarajan · 04 Aug 2022
• GeoECG: Data Augmentation via Wasserstein Geodesic Perturbation for Robust Electrocardiogram Prediction · Jiacheng Zhu, Jielin Qiu, Zhuolin Yang, Douglas Weber, M. Rosenberg, Emerson Liu, Bo-wen Li, Ding Zhao · 02 Aug 2022 [OOD]
• Pseudo-label Guided Cross-video Pixel Contrast for Robotic Surgical Scene Segmentation with Limited Annotations · Yang Yu, Zixu Zhao, Yueming Jin, Guangyong Chen, Qi Dou, Pheng-Ann Heng · 20 Jul 2022
• Gradual Domain Adaptation without Indexed Intermediate Domains · Hong-You Chen, Wei-Lun Chao · 11 Jul 2022 [CLL]
• Empirical Evaluation and Theoretical Analysis for Representation Learning: A Survey · Kento Nozawa, Issei Sato · 18 Apr 2022 [AI4TS]
• MutexMatch: Semi-Supervised Learning with Mutex-Based Consistency Regularization · Yue Duan, Zhen Zhao, Lei Qi, Lei Wang, Luping Zhou, Yinghuan Shi, Yang Gao · 27 Mar 2022
• Sample Efficiency of Data Augmentation Consistency Regularization · Shuo Yang, Yijun Dong, Rachel A. Ward, Inderjit S. Dhillon, Sujay Sanghavi, Qi Lei · 24 Feb 2022 [AAML]
• An Information-theoretical Approach to Semi-supervised Learning under Covariate-shift · Gholamali Aminian, Mahed Abroshan, Mohammad Mahdi Khalili, Laura Toni, M. Rodrigues · 24 Feb 2022 [OOD]
• Self-Training: A Survey · Massih-Reza Amini, Vasilii Feofanov, Loïc Pauletto, Lies Hadjadj, Emilie Devijver, Yury Maximov · 24 Feb 2022 [SSL]
• Debiased Self-Training for Semi-Supervised Learning · Baixu Chen, Junguang Jiang, Ximei Wang, Pengfei Wan, Jianmin Wang, Mingsheng Long · 15 Feb 2022
• A Characterization of Semi-Supervised Adversarially-Robust PAC Learnability · Idan Attias, Steve Hanneke, Yishay Mansour · 11 Feb 2022
• How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis · Shuai Zhang, M. Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong · 21 Jan 2022 [SSL, MLT]
• Semi-supervised Domain Adaptive Structure Learning · Can Qin, Lichen Wang, Qianqian Ma, Yu Yin, Huan Wang, Y. Fu · 12 Dec 2021 [TTA]
• Enhancing Counterfactual Classification via Self-Training · Ruijiang Gao, Max Biggs, Wei-Ju Sun, Ligong Han · 08 Dec 2021 [CML, OffRL]
• Towards the Generalization of Contrastive Self-Supervised Learning · Weiran Huang, Mingyang Yi, Xuyang Zhao, Zihao Jiang · 01 Nov 2021 [SSL]
• X-model: Improving Data Efficiency in Deep Learning with A Minimax Model · Ximei Wang, Xinyang Chen, Jianmin Wang, Mingsheng Long · 09 Oct 2021
• On the Surrogate Gap between Contrastive and Supervised Losses · Han Bao, Yoshihiro Nagano, Kento Nozawa · 06 Oct 2021 [SSL, UQCV]
• Task-adaptive Pre-training and Self-training are Complementary for Natural Language Understanding · Shiyang Li, Semih Yavuz, Wenhu Chen, Xifeng Yan · 14 Sep 2021
• Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning · Colin Wei, Sang Michael Xie, Tengyu Ma · 17 Jun 2021
• Generate, Annotate, and Learn: NLP with Synthetic Text · Xuanli He, Islam Nassar, J. Kiros, Gholamreza Haffari, Mohammad Norouzi · 11 Jun 2021
• Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning · Zixin Wen, Yuanzhi Li · 31 May 2021 [SSL, MLT]
• Cross-Referencing Self-Training Network for Sound Event Detection in Audio Mixtures · Sangwook Park, D. Han, Mounya Elhilali · 27 May 2021
• Cycle Self-Training for Domain Adaptation · Hong Liu, Jianmin Wang, Mingsheng Long · 05 Mar 2021
• Can Pretext-Based Self-Supervised Learning Be Boosted by Downstream Data? A Theoretical Analysis · Jiaye Teng, Weiran Huang, Haowei He · 05 Mar 2021 [SSL]