Variational Information Distillation for Knowledge Transfer (arXiv:1904.05835)
11 April 2019
Sungsoo Ahn, S. Hu, Andreas C. Damianou, Neil D. Lawrence, Zhenwen Dai
Papers citing "Variational Information Distillation for Knowledge Transfer" (50 of 321 papers shown)
DynamicKD: An Effective Knowledge Distillation via Dynamic Entropy Correction-Based Distillation for Gap Optimizing. Songling Zhu, Ronghua Shang, Bo Yuan, Weitong Zhang, Yangyang Li, Licheng Jiao. 09 May 2023.
Avatar Knowledge Distillation: Self-ensemble Teacher Paradigm with Uncertainty. Yuan Zhang, Weihua Chen, Yichen Lu, Tao Huang, Xiuyu Sun, Jian Cao. 04 May 2023.
Self-discipline on multiple channels. Jiutian Zhao, Liangchen Luo, Hao Wang. 27 Apr 2023.
Function-Consistent Feature Distillation. Dongyang Liu, Meina Kan, Shiguang Shan, Xilin Chen. 24 Apr 2023.
eTag: Class-Incremental Learning with Embedding Distillation and Task-Oriented Generation. Libo Huang, Yan Zeng, Chuanguang Yang, Zhulin An, Boyu Diao, Yongjun Xu. 20 Apr 2023. [CLL]
Knowledge Distillation Under Ideal Joint Classifier Assumption. Huayu Li, Xiwen Chen, G. Ditzler, Janet Roveda, Ao Li. 19 Apr 2023.
A Survey on Approximate Edge AI for Energy Efficient Autonomous Driving Services. Dewant Katare, Diego Perino, J. Nurmi, M. Warnier, Marijn Janssen, Aaron Yi Ding. 13 Apr 2023.
Distilling Token-Pruned Pose Transformer for 2D Human Pose Estimation. Feixiang Ren. 12 Apr 2023. [ViT]
Homogenizing Non-IID datasets via In-Distribution Knowledge Distillation for Decentralized Learning. Deepak Ravikumar, Gobinda Saha, Sai Aparna Aketi, Kaushik Roy. 09 Apr 2023.
Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation. Yang Jin, Mengke Li, Yang Lu, Y. Cheung, Hanzi Wang. 03 Apr 2023.
DIME-FM: DIstilling Multimodal and Efficient Foundation Models. Ximeng Sun, Pengchuan Zhang, Peizhao Zhang, Hardik Shah, Kate Saenko, Xide Xia. 31 Mar 2023. [VLM]
Information-Theoretic GAN Compression with Variational Energy-based Model. Minsoo Kang, Hyewon Yoo, Eunhee Kang, Sehwan Ki, Hyong-Euk Lee, Bohyung Han. 28 Mar 2023. [GAN]
DisWOT: Student Architecture Search for Distillation WithOut Training. Peijie Dong, Lujun Li, Zimian Wei. 28 Mar 2023.
Towards a Smaller Student: Capacity Dynamic Distillation for Efficient Image Retrieval. Yi Xie, Huaidong Zhang, Xuemiao Xu, Jianqing Zhu, Shengfeng He. 16 Mar 2023. [VLM]
A Contrastive Knowledge Transfer Framework for Model Compression and Transfer Learning. Kaiqi Zhao, Yitao Chen, Ming Zhao. 14 Mar 2023. [VLM]
Enhancing Low-resolution Face Recognition with Feature Similarity Knowledge Distillation. Sungho Shin, Yeonguk Yu, Kyoobin Lee. 08 Mar 2023. [CVBM]
Generic-to-Specific Distillation of Masked Autoencoders. Wei Huang, Zhiliang Peng, Li Dong, Furu Wei, Jianbin Jiao, QiXiang Ye. 28 Feb 2023.
Leveraging Angular Distributions for Improved Knowledge Distillation. Eunyeong Jeon, Hongjun Choi, Ankita Shukla, Pavan Turaga. 27 Feb 2023.
Two-in-one Knowledge Distillation for Efficient Facial Forgery Detection. Chu Zhou, Jiajun Huang, Daochang Liu, Chengbin Du, Siqi Ma, Surya Nepal, Chang Xu. 21 Feb 2023.
SLaM: Student-Label Mixing for Distillation with Unlabeled Examples. Vasilis Kontonis, Fotis Iliopoulos, Khoa Trinh, Cenk Baykal, Gaurav Menghani, Erik Vee. 08 Feb 2023.
Knowledge Distillation on Graphs: A Survey. Yijun Tian, Shichao Pei, Xiangliang Zhang, Chuxu Zhang, Nitesh V. Chawla. 01 Feb 2023.
Understanding Self-Distillation in the Presence of Label Noise. Rudrajit Das, Sujay Sanghavi. 30 Jan 2023.
RNAS-CL: Robust Neural Architecture Search by Cross-Layer Knowledge Distillation. Utkarsh Nath, Yancheng Wang, Yingzhen Yang. 19 Jan 2023. [AAML]
Prototype-guided Cross-task Knowledge Distillation for Large-scale Models. Deng Li, Aming Wu, Yahong Han, Qingwen Tian. 26 Dec 2022. [VLM]
Boosting Urban Traffic Speed Prediction via Integrating Implicit Spatial Correlations. Dongkun Wang, Wei Fan, Pengyang Wang, P. Wang, Dongjie Wang, Denghui Zhang, Yanjie Fu. 25 Dec 2022.
Training Lightweight Graph Convolutional Networks with Phase-field Models. H. Sahbi. 19 Dec 2022.
Gait Recognition Using 3-D Human Body Shape Inference. Haidong Zhu, Zhao-Heng Zheng, Ramkant Nevatia. 18 Dec 2022. [CVBM, 3DH]
Enhancing Low-Density EEG-Based Brain-Computer Interfaces with Similarity-Keeping Knowledge Distillation. Xin Huang, Sung-Yu Chen, Chun-Shu Wei. 06 Dec 2022.
Leveraging Different Learning Styles for Improved Knowledge Distillation in Biomedical Imaging. Usma Niyaz, A. Sambyal, Deepti R. Bathula. 06 Dec 2022.
Hint-dynamic Knowledge Distillation. Yiyang Liu, Chenxin Li, Xiaotong Tu, Xinghao Ding, Yue Huang. 30 Nov 2022.
Knowledge Distillation based Degradation Estimation for Blind Super-Resolution. Bin Xia, Yulun Zhang, Yitong Wang, Yapeng Tian, Wenming Yang, Radu Timofte, Luc Van Gool. 30 Nov 2022.
Curriculum Temperature for Knowledge Distillation. Zheng Li, Xiang Li, Lingfeng Yang, Borui Zhao, Renjie Song, Lei Luo, Jun Yu Li, Jian Yang. 29 Nov 2022.
Decentralized Learning with Multi-Headed Distillation. A. Zhmoginov, Mark Sandler, Nolan Miller, Gus Kristiansen, Max Vladymyrov. 28 Nov 2022. [FedML]
Class-aware Information for Logit-based Knowledge Distillation. Shuoxi Zhang, Hanpeng Liu, J. Hopcroft, Kun He. 27 Nov 2022.
Structured Knowledge Distillation Towards Efficient and Compact Multi-View 3D Detection. Linfeng Zhang, Yukang Shi, Hung-Shuo Tai, Zhipeng Zhang, Yuan He, Ke Wang, Kaisheng Ma. 14 Nov 2022.
An Interpretable Neuron Embedding for Static Knowledge Distillation. Wei Han, Yang Wang, Christian Böhm, Junming Shao. 14 Nov 2022.
Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study. Hongjun Choi, Eunyeong Jeon, Ankita Shukla, Pavan Turaga. 08 Nov 2022.
Distilling Representations from GAN Generator via Squeeze and Span. Yu Yang, Xiaotian Cheng, Chang-rui Liu, Hakan Bilen, Xiang Ji. 06 Nov 2022. [GAN]
SADT: Combining Sharpness-Aware Minimization with Self-Distillation for Improved Model Generalization. Masud An Nur Islam Fahim, Jani Boutellier. 01 Nov 2022.
Multimodal Transformer Distillation for Audio-Visual Synchronization. Xuan-Bo Chen, Haibin Wu, Chung-Che Wang, Hung-yi Lee, J. Jang. 27 Oct 2022.
Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision. Jiongyu Guo, Defang Chen, Can Wang. 25 Oct 2022.
Respecting Transfer Gap in Knowledge Distillation. Yulei Niu, Long Chen, Chan Zhou, Hanwang Zhang. 23 Oct 2022.
Few-Shot Learning of Compact Models via Task-Specific Meta Distillation. Yong Wu, Shekhor Chanda, M. Hosseinzadeh, Zhi Liu, Yang Wang. 18 Oct 2022. [VLM]
Weighted Distillation with Unlabeled Examples. Fotis Iliopoulos, Vasilis Kontonis, Cenk Baykal, Gaurav Menghani, Khoa Trinh, Erik Vee. 13 Oct 2022.
Efficient Knowledge Distillation from Model Checkpoints. Chaofei Wang, Qisen Yang, Rui Huang, S. Song, Gao Huang. 12 Oct 2022. [FedML]
Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again. Xin-Chun Li, Wenxuan Fan, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, De-Chuan Zhan. 10 Oct 2022.
Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis. Xu Yan, Heshen Zhan, Chaoda Zheng, Jiantao Gao, Ruimao Zhang, Shuguang Cui, Zhen Li. 09 Oct 2022. [3DPC]
Meta-Ensemble Parameter Learning. Zhengcong Fei, Shuman Tian, Junshi Huang, Xiaoming Wei, Xiaolin K. Wei. 05 Oct 2022. [OOD]
Attention Distillation: self-supervised vision transformer students need more guidance. Kai Wang, Fei Yang, Joost van de Weijer. 03 Oct 2022. [ViT]
Towards a Unified View of Affinity-Based Knowledge Distillation. Vladimir Li, A. Maki. 30 Sep 2022.