Cited By · arXiv:2210.17332

Teacher-Student Architecture for Knowledge Learning: A Survey
Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu
28 October 2022
Papers citing "Teacher-Student Architecture for Knowledge Learning: A Survey" (37 / 37 papers shown)
1. FLINT: Learning-based Flow Estimation and Temporal Interpolation for Scientific Ensemble Visualization
   Hamid Gadirov, Jos B. T. M. Roerdink, Steffen Frey · [AI4CE] · 116 / 1 / 0 · 24 Feb 2025

2. Information Theoretic Representation Distillation
   Roy Miles, Adrian Lopez-Rodriguez, K. Mikolajczyk · [MQ] · 63 / 22 / 0 · 01 Dec 2021

3. Multi-Task Self-Training for Learning General Representations
   Golnaz Ghiasi, Barret Zoph, E. D. Cubuk, Quoc V. Le, Nayeon Lee · [SSL] · 58 / 101 / 0 · 25 Aug 2021

4. Adaptive Multi-Teacher Multi-level Knowledge Distillation
   Yuang Liu, Wei Zhang, Jun Wang · 59 / 157 / 0 · 06 Mar 2021

5. Learning Student-Friendly Teacher Networks for Knowledge Distillation
   D. Park, Moonsu Cha, C. Jeong, Daesin Kim, Bohyung Han · 158 / 101 / 0 · 12 Feb 2021

6. Knowledge Distillation in Deep Learning and its Applications
   Abdolmaged Alkhulaifi, Fahad Alsahli, Irfan Ahmad · [FedML] · 35 / 78 / 0 · 17 Jul 2020

7. Knowledge Distillation Meets Self-Supervision
   Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy · [FedML] · 69 / 283 / 0 · 12 Jun 2020

8. Peer Collaborative Learning for Online Knowledge Distillation
   Guile Wu, S. Gong · [FedML] · 39 / 129 / 0 · 07 Jun 2020

9. Deep Learning for Wireless Communications
   T. Erpek, Tim O'Shea, Y. Sagduyu, Yi Shi, T. Clancy · 79 / 138 / 0 · 12 May 2020

10. A Simple Semi-Supervised Learning Framework for Object Detection
    Kihyuk Sohn, Zizhao Zhang, Chun-Liang Li, Han Zhang, Chen-Yu Lee, Tomas Pfister · 75 / 496 / 0 · 10 May 2020

11. Self-trained Deep Ordinal Regression for End-to-End Video Anomaly Detection
    Guansong Pang, Cheng Yan, Chunhua Shen, Anton Van Den Hengel, Xiao Bai · 59 / 209 / 0 · 15 Mar 2020

12. Feature-map-level Online Adversarial Knowledge Distillation
    Inseop Chung, Seonguk Park, Jangho Kim, Nojun Kwak · [GAN] · 75 / 128 / 0 · 05 Feb 2020

13. Towards Oracle Knowledge Distillation with Neural Architecture Search
    Minsoo Kang, Jonghwan Mun, Bohyung Han · [FedML] · 69 / 44 / 0 · 29 Nov 2019

14. Self-training with Noisy Student improves ImageNet classification
    Qizhe Xie, Minh-Thang Luong, Eduard H. Hovy, Quoc V. Le · [NoLa] · 296 / 2,387 / 0 · 11 Nov 2019

15. Uninformed Students: Student-Teacher Anomaly Detection with Discriminative Latent Embeddings
    Paul Bergmann, Michael Fauser, David Sattlegger, C. Steger · 72 / 663 / 0 · 06 Nov 2019

16. VarGFaceNet: An Efficient Variable Group Convolutional Neural Network for Lightweight Face Recognition
    Mengjia Yan, Mengao Zhao, Zining Xu, Qian Zhang, Guoli Wang, Zhizhong Su · [CVBM] · 56 / 92 / 0 · 11 Oct 2019

17. Knowledge Distillation from Internal Representations
    Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, Edward Guo · 70 / 181 / 0 · 08 Oct 2019

18. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
    Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf · 208 / 7,481 / 0 · 02 Oct 2019

19. TinyBERT: Distilling BERT for Natural Language Understanding
    Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, F. Wang, Qun Liu · [VLM] · 92 / 1,857 / 0 · 23 Sep 2019

20. Self-Knowledge Distillation in Natural Language Processing
    Sangchul Hahn, Heeyoul Choi · 62 / 111 / 0 · 02 Aug 2019

21. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
    Mingxing Tan, Quoc V. Le · [3DV] [MedIm] · 131 / 18,058 / 0 · 28 May 2019

22. Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
    Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma · [FedML] · 60 / 857 / 0 · 17 May 2019

23. Bidirectional Learning for Domain Adaptation of Semantic Segmentation
    Yunsheng Li, Lu Yuan, Nuno Vasconcelos · [SSeg] · 93 / 627 / 0 · 24 Apr 2019

24. Feature Fusion for Online Mutual Knowledge Distillation
    Jangho Kim, Minsung Hyun, Inseop Chung, Nojun Kwak · [FedML] · 60 / 91 / 0 · 19 Apr 2019

25. Correlation Congruence for Knowledge Distillation
    Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yichao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang · 86 / 510 / 0 · 03 Apr 2019

26. Ranking Distillation: Learning Compact Ranking Models With High Performance for Recommender System
    Jiaxi Tang, Ke Wang · 62 / 188 / 0 · 19 Sep 2018

27. Emotion Recognition in Speech using Cross-Modal Transfer in the Wild
    Samuel Albanie, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman · [CVBM] · 53 / 271 / 0 · 16 Aug 2018

28. Knowledge Distillation by On-the-Fly Native Ensemble
    Xu Lan, Xiatian Zhu, S. Gong · 275 / 476 / 0 · 12 Jun 2018

29. Learning to Adapt Structured Output Space for Semantic Segmentation
    Yi-Hsuan Tsai, Wei-Chih Hung, S. Schulter, Kihyuk Sohn, Ming-Hsuan Yang, Manmohan Chandraker · [OOD] [SSeg] · 135 / 1,543 / 0 · 28 Feb 2018

30. CyCADA: Cycle-Consistent Adversarial Domain Adaptation
    Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A. Efros, Trevor Darrell · 136 / 3,001 / 0 · 08 Nov 2017

31. Large-Scale Domain Adaptation via Teacher-Student Learning
    Jinyu Li, M. Seltzer, Xi Wang, Rui Zhao, Jiawei Liu · 134 / 140 / 0 · 17 Aug 2017

32. Deep Mutual Learning
    Ying Zhang, Tao Xiang, Timothy M. Hospedales, Huchuan Lu · [FedML] · 143 / 1,650 / 0 · 01 Jun 2017

33. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
    Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A. Efros · [GAN] · 111 / 5,554 / 0 · 30 Mar 2017

34. Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
    Sergey Zagoruyko, N. Komodakis · 113 / 2,569 / 0 · 12 Dec 2016

35. Overcoming catastrophic forgetting in neural networks
    J. Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, J. Veness, Guillaume Desjardins, ..., A. Grabska-Barwinska, Demis Hassabis, Claudia Clopath, D. Kumaran, R. Hadsell · [CLL] · 330 / 7,478 / 0 · 02 Dec 2016

36. Learning without Forgetting
    Zhizhong Li, Derek Hoiem · [CLL] [OOD] [SSL] · 282 / 4,391 / 0 · 29 Jun 2016

37. FitNets: Hints for Thin Deep Nets
    Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio · [FedML] · 286 / 3,870 / 0 · 19 Dec 2014