On the Efficacy of Knowledge Distillation
3 October 2019
Ligang He
Rui Mao
Papers citing
"On the Efficacy of Knowledge Distillation"
50 / 319 papers shown
Title
HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers
Chen Liang
Haoming Jiang
Zheng Li
Xianfeng Tang
Bin Yin
Tuo Zhao
VLM
29
24
0
19 Feb 2023
Self-Supervised Node Representation Learning via Node-to-Neighbourhood Alignment
Wei Dong
Dawei Yan
Peifeng Wang
SSL
19
2
0
09 Feb 2023
Knowledge Distillation-based Information Sharing for Online Process Monitoring in Decentralized Manufacturing System
Zhangyue Shi
Yuxuan Li
Chenang Liu
29
8
0
08 Feb 2023
On student-teacher deviations in distillation: does it pay to disobey?
Vaishnavh Nagarajan
A. Menon
Srinadh Bhojanapalli
H. Mobahi
Surinder Kumar
43
9
0
30 Jan 2023
Supervision Complexity and its Role in Knowledge Distillation
Hrayr Harutyunyan
A. S. Rawat
A. Menon
Seungyeon Kim
Surinder Kumar
32
12
0
28 Jan 2023
Improving Text-based Early Prediction by Distillation from Privileged Time-Series Text
Jinghui Liu
Daniel Capurro
Anthony N. Nguyen
Karin Verspoor
AI4TS
23
3
0
26 Jan 2023
TinyMIM: An Empirical Study of Distilling MIM Pre-trained Models
Sucheng Ren
Fangyun Wei
Zheng-Wei Zhang
Han Hu
42
35
0
03 Jan 2023
Publishing Efficient On-device Models Increases Adversarial Vulnerability
Sanghyun Hong
Nicholas Carlini
Alexey Kurakin
AAML
38
2
0
28 Dec 2022
MAViL: Masked Audio-Video Learners
Po-Yao (Bernie) Huang
Vasu Sharma
Hu Xu
Chaitanya K. Ryali
Haoqi Fan
Yanghao Li
Shang-Wen Li
Gargi Ghosh
Jitendra Malik
Christoph Feichtenhofer
26
51
0
15 Dec 2022
FlexiViT: One Model for All Patch Sizes
Lucas Beyer
Pavel Izmailov
Alexander Kolesnikov
Mathilde Caron
Simon Kornblith
Xiaohua Zhai
Matthias Minderer
Michael Tschannen
Ibrahim M. Alabdulmohsin
Filip Pavetić
VLM
55
90
0
15 Dec 2022
ResNet Structure Simplification with the Convolutional Kernel Redundancy Measure
Hongzhi Zhu
R. Rohling
Septimiu Salcudean
17
0
0
01 Dec 2022
Expanding Small-Scale Datasets with Guided Imagination
Yifan Zhang
Daquan Zhou
Bryan Hooi
Kaixin Wang
Jiashi Feng
49
46
0
25 Nov 2022
Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation
Jiawei Du
Yiding Jiang
Vincent Y. F. Tan
Qiufeng Wang
Haizhou Li
DD
43
110
0
20 Nov 2022
D³ETR: Decoder Distillation for Detection Transformer
Xiaokang Chen
Jiahui Chen
Yong-Jin Liu
Gang Zeng
42
16
0
17 Nov 2022
Distilling Representations from GAN Generator via Squeeze and Span
Yu Yang
Xiaotian Cheng
Chang-rui Liu
Hakan Bilen
Xiang Ji
GAN
33
0
0
06 Nov 2022
Self Similarity Matrix based CNN Filter Pruning
S. Rakshith
Jayesh Rajkumar Vachhani
Sourabh Vasant Gothe
Rishabh Khurana
30
0
0
03 Nov 2022
Respecting Transfer Gap in Knowledge Distillation
Yulei Niu
Long Chen
Chan Zhou
Hanwang Zhang
26
23
0
23 Oct 2022
On effects of Knowledge Distillation on Transfer Learning
Sushil Thapa
24
1
0
18 Oct 2022
Efficient Knowledge Distillation from Model Checkpoints
Chaofei Wang
Qisen Yang
Rui Huang
S. Song
Gao Huang
FedML
14
35
0
12 Oct 2022
Pre-Training Representations of Binary Code Using Contrastive Learning
Yifan Zhang
Chen Huang
Yueke Zhang
Kevin Cao
Scott Thomas Andersen
Huajie Shao
Kevin Leach
Yu Huang
57
3
0
11 Oct 2022
Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again
Xin-Chun Li
Wenxuan Fan
Shaoming Song
Yinchuan Li
Bingshuai Li
Yunfeng Shao
De-Chuan Zhan
59
30
0
10 Oct 2022
Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis
Xu Yan
Heshen Zhan
Chaoda Zheng
Jiantao Gao
Ruimao Zhang
Shuguang Cui
Zhen Li
3DPC
57
33
0
09 Oct 2022
Robust Active Distillation
Cenk Baykal
Khoa Trinh
Fotis Iliopoulos
Gaurav Menghani
Erik Vee
39
10
0
03 Oct 2022
Using Knowledge Distillation to improve interpretable models in a retail banking context
Maxime Biehler
Mohamed Guermazi
Célim Starck
62
2
0
30 Sep 2022
Exploring Inconsistent Knowledge Distillation for Object Detection with Data Augmentation
Jiawei Liang
Siyuan Liang
Aishan Liu
Ke Ma
Jingzhi Li
Xiaochun Cao
VLM
53
15
0
20 Sep 2022
Parameter-Efficient Conformers via Sharing Sparsely-Gated Experts for End-to-End Speech Recognition
Ye Bai
Jie Li
W. Han
Hao Ni
Kaituo Xu
Zhuo Zhang
Cheng Yi
Xiaorui Wang
MoE
31
1
0
17 Sep 2022
CES-KD: Curriculum-based Expert Selection for Guided Knowledge Distillation
Ibtihel Amara
M. Ziaeefard
B. Meyer
W. Gross
J. Clark
23
4
0
15 Sep 2022
Switchable Online Knowledge Distillation
Biao Qian
Yang Wang
Hongzhi Yin
Richang Hong
Meng Wang
66
39
0
12 Sep 2022
Continual Learning for Pose-Agnostic Object Recognition in 3D Point Clouds
Xihao Wang
Xian Wei
3DPC
43
5
0
11 Sep 2022
Disentangle and Remerge: Interventional Knowledge Distillation for Few-Shot Object Detection from A Conditional Causal Perspective
Jiangmeng Li
Yanan Zhang
Jingyao Wang
Hui Xiong
Chengbo Jiao
Xiaohui Hu
Changwen Zheng
Gang Hua
CML
42
28
0
26 Aug 2022
Masked Autoencoders Enable Efficient Knowledge Distillers
Yutong Bai
Zeyu Wang
Junfei Xiao
Chen Wei
Huiyu Wang
Alan Yuille
Yuyin Zhou
Cihang Xie
CLL
32
40
0
25 Aug 2022
Effectiveness of Function Matching in Driving Scene Recognition
Shingo Yashima
26
1
0
20 Aug 2022
Teacher Guided Training: An Efficient Framework for Knowledge Transfer
Manzil Zaheer
A. S. Rawat
Seungyeon Kim
Chong You
Himanshu Jain
Andreas Veit
Rob Fergus
Surinder Kumar
VLM
18
2
0
14 Aug 2022
Self-Knowledge Distillation via Dropout
Hyoje Lee
Yeachan Park
Hyun Seo
Myung-joo Kang
FedML
21
15
0
11 Aug 2022
Overlooked Poses Actually Make Sense: Distilling Privileged Knowledge for Human Motion Prediction
Xiaoning Sun
Qiongjie Cui
Huaijiang Sun
Bin Li
Weiqing Li
Jianfeng Lu
34
7
0
02 Aug 2022
PEA: Improving the Performance of ReLU Networks for Free by Using Progressive Ensemble Activations
Á. Utasi
35
0
0
28 Jul 2022
Efficient One Pass Self-distillation with Zipf's Label Smoothing
Jiajun Liang
Linze Li
Z. Bing
Borui Zhao
Yao Tang
Bo Lin
Haoqiang Fan
28
19
0
26 Jul 2022
Federated Semi-Supervised Domain Adaptation via Knowledge Transfer
Madhureeta Das
Xianhao Chen
Xiaoyong Yuan
Lan Zhang
11
2
0
21 Jul 2022
Model Compression for Resource-Constrained Mobile Robots
Timotheos Souroulla
Alberto Hata
Ahmad Terra
Özer Özkahraman
Rafia Inam
13
0
0
20 Jul 2022
Teachers in concordance for pseudo-labeling of 3D sequential data
Awet Haileslassie Gebrehiwot
Patrik Vacek
David Hurych
Karel Zimmermann
P. Pérez
Tomáš Svoboda
3DPC
22
6
0
13 Jul 2022
ACT-Net: Asymmetric Co-Teacher Network for Semi-supervised Memory-efficient Medical Image Segmentation
Ziyuan Zhao
An Zhu
Zeng Zeng
B. Veeravalli
Cuntai Guan
27
9
0
05 Jul 2022
Informed Learning by Wide Neural Networks: Convergence, Generalization and Sampling Complexity
Jianyi Yang
Shaolei Ren
32
3
0
02 Jul 2022
Sub-8-Bit Quantization Aware Training for 8-Bit Neural Network Accelerator with On-Device Speech Recognition
Kai Zhen
Hieu Duy Nguyen
Ravi Chinta
Nathan Susanj
Athanasios Mouchtaris
Tariq Afzal
Ariya Rastrow
MQ
28
11
0
30 Jun 2022
CGAR: Critic Guided Action Redistribution in Reinforcement Leaning
Tairan Huang
Xu Li
Haoyuan Li
Mingming Sun
P. Li
18
0
0
23 Jun 2022
Revisiting Self-Distillation
M. Pham
Minsu Cho
Ameya Joshi
C. Hegde
23
22
0
17 Jun 2022
Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems
Jack G. M. FitzGerald
Shankar Ananthakrishnan
Konstantine Arkoudas
Davide Bernardi
Abhishek Bhagia
...
Pan Wei
Haiyang Yu
Shuai Zheng
Gokhan Tur
Premkumar Natarajan
ELM
14
30
0
15 Jun 2022
Toward Student-Oriented Teacher Network Training For Knowledge Distillation
Chengyu Dong
Liyuan Liu
Jingbo Shang
46
6
0
14 Jun 2022
The Modality Focusing Hypothesis: Towards Understanding Crossmodal Knowledge Distillation
Zihui Xue
Zhengqi Gao
Sucheng Ren
Hang Zhao
27
36
0
13 Jun 2022
Crowd Localization from Gaussian Mixture Scoped Knowledge and Scoped Teacher
Juncheng Wang
Junyuan Gao
Yuan Yuan
Qi Wang
40
18
0
12 Jun 2022
Distilling Knowledge from Object Classification to Aesthetics Assessment
Jingwen Hou
Henghui Ding
Weisi Lin
Weide Liu
Yuming Fang
19
35
0
02 Jun 2022