On the Efficacy of Knowledge Distillation

3 October 2019 · arXiv:1910.01348
Ligang He, Rui Mao

Papers citing "On the Efficacy of Knowledge Distillation"

Showing 19 of 319 citing papers.
Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation
Rasool Fakoor, Jonas W. Mueller, Nick Erickson, Pratik Chaudhari, Alex Smola · 25 Jun 2020

Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation
Duong H. Le, Vo Trung Nhan, N. Thoai · 20 Jun 2020

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao · 09 Jun 2020

Self-Distillation as Instance-Specific Label Smoothing
Zhilu Zhang, M. Sabuncu · 09 Jun 2020

ResKD: Residual-Guided Knowledge Distillation
Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, Xi Li · 08 Jun 2020

An Empirical Analysis of the Impact of Data Augmentation on Knowledge Distillation
Deepan Das, Haley Massa, Abhimanyu Kulkarni, Theodoros Rekatsinas · 06 Jun 2020

An Overview of Neural Network Compression
James O'Neill · 05 Jun 2020
Channel Distillation: Channel-Wise Attention for Knowledge Distillation
Zaida Zhou, Chaoran Zhuge, Xinwei Guan, Wen Liu · 02 Jun 2020

Sub-Band Knowledge Distillation Framework for Speech Enhancement
Xiang Hao, Shi-Xue Wen, Xiangdong Su, Yun Liu, Guanglai Gao, Xiaofei Li · 29 May 2020

Language Model Prior for Low-Resource Neural Machine Translation
Christos Baziotis, Barry Haddow, Alexandra Birch · 30 Apr 2020

Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation
Hao Cheng, Fanxu Meng, Ke Li, Huixiang Luo, Guangming Lu, Xing Sun, Feiyue Huang · 26 Apr 2020

Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model
Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong · 31 Mar 2020

A Survey of Methods for Low-Power Deep Learning and Computer Vision
Abhinav Goel, Caleb Tung, Yung-Hsiang Lu, George K. Thiruvathukal · 24 Mar 2020
Knowledge distillation via adaptive instance normalization
Jing Yang, Brais Martínez, Adrian Bulat, Georgios Tzimiropoulos · 09 Mar 2020

Pacemaker: Intermediate Teacher Knowledge Distillation For On-The-Fly Convolutional Neural Network
Wonchul Son, Youngbin Kim, Wonseok Song, Youngsuk Moon, Wonjun Hwang · 09 Mar 2020

Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph E. Gonzalez · 26 Feb 2020

QUEST: Quantized embedding space for transferring knowledge
Himalaya Jain, Spyros Gidaris, N. Komodakis, P. Pérez, Matthieu Cord · 03 Dec 2019

Knowledge Transfer Graph for Deep Collaborative Learning
Soma Minami, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi · 10 Sep 2019

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong · 12 Jun 2018