ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Conditional Teacher-Student Learning
arXiv:1904.12399 · 28 April 2019
Zhong Meng, Jinyu Li, Yong Zhao, Jiawei Liu

Papers citing "Conditional Teacher-Student Learning"

17 papers
Understanding the Capabilities and Limitations of Weak-to-Strong Generalization
Wei Yao, Wenkai Yang, Zihan Wang, Yankai Lin, Yong Liu · [ELM] · 03 Feb 2025
Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu · 08 Aug 2023
ERSAM: Neural Architecture Search For Energy-Efficient and Real-Time Social Ambiance Measurement
Chaojian Li, Wenwan Chen, Jiayi Yuan, Yingyan Lin, Ashutosh Sabharwal · 19 Mar 2023
Practical Knowledge Distillation: Using DNNs to Beat DNNs
Chungman Lee, Pavlos Anastasios Apostolopulos, Igor L. Markov · [FedML] · 23 Feb 2023
Enabling All In-Edge Deep Learning: A Literature Review
Praveen Joshi, Mohammed Hasanuzzaman, Chandra Thapa, Haithem Afli, T. Scully · 07 Apr 2022
Bridging the Gap Between Patient-specific and Patient-independent Seizure Prediction via Knowledge Distillation
Di Wu, Jie Yang, Mohamad Sawan · [FedML] · 25 Feb 2022
Learn From the Past: Experience Ensemble Knowledge Distillation
Chaofei Wang, Shaowei Zhang, S. Song, Gao Huang · 25 Feb 2022
Sequence-level self-learning with multiple hypotheses
K. Kumatani, Dimitrios Dimitriadis, Yashesh Gaur, R. Gmyr, Sefik Emre Eskimez, Jinyu Li, Michael Zeng · [SSL] · 10 Dec 2021
A Light-weight Deep Human Activity Recognition Algorithm Using Multi-knowledge Distillation
Runze Chen, Haiyong Luo, Fang Zhao, Xuechun Meng, Zhiqing Xie, Yida Zhu · [VLM, HAI] · 06 Jul 2021
Multi-Level Transfer Learning from Near-Field to Far-Field Speaker Verification
Li Zhang, Qing Wang, Kong Aik Lee, Lei Xie, Haizhou Li · 17 Jun 2021
Balanced Knowledge Distillation for Long-tailed Learning
Shaoyu Zhang, Chen Chen, Xiyuan Hu, Silong Peng · 21 Apr 2021
Internal Language Model Estimation for Domain-Adaptive End-to-End Speech Recognition
Zhong Meng, S. Parthasarathy, Eric Sun, Yashesh Gaur, Naoyuki Kanda, Liang Lu, Xie Chen, Rui Zhao, Jinyu Li, Jiawei Liu · [AuLLM] · 03 Nov 2020
Classification of Diabetic Retinopathy Using Unlabeled Data and Knowledge Distillation
Sajjad Abbasi, M. Hajabdollahi, P. Khadivi, N. Karimi, Roshank Roshandel, S. Shirani, S. Samavi · 01 Sep 2020
Relational Teacher Student Learning with Neural Label Embedding for Device Adaptation in Acoustic Scene Classification
Hu Hu, Sabato Marco Siniscalchi, Yannan Wang, Chin-Hui Lee · 31 Jul 2020
Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao · [VLM] · 09 Jun 2020
L-Vector: Neural Label Embedding for Domain Adaptation
Zhong Meng, Hu Hu, Jinyu Li, Changliang Liu, Yan-ping Huang, Jiawei Liu, Chin-Hui Lee · 25 Apr 2020
Modeling Teacher-Student Techniques in Deep Neural Networks for Knowledge Distillation
Sajjad Abbasi, M. Hajabdollahi, N. Karimi, S. Samavi · 31 Dec 2019