Variational Information Distillation for Knowledge Transfer
arXiv:1904.05835

11 April 2019
Sungsoo Ahn
S. Hu
Andreas C. Damianou
Neil D. Lawrence
Zhenwen Dai

Papers citing "Variational Information Distillation for Knowledge Transfer"

50 / 321 papers shown
Title
Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation
Mingi Ji
Seungjae Shin
Seunghyun Hwang
Gibeom Park
Il-Chul Moon
13
120
0
15 Mar 2021
A New Training Framework for Deep Neural Network
Zhenyan Hou
Wenxuan Fan
FedML
18
2
0
12 Mar 2021
Doubly Contrastive Deep Clustering
Zhiyuan Dang
Cheng Deng
Xu Yang
Heng-Chiao Huang
SSL
16
15
0
09 Mar 2021
PURSUhInT: In Search of Informative Hint Points Based on Layer Clustering for Knowledge Distillation
Reyhan Kevser Keser
Aydin Ayanzadeh
O. A. Aghdam
Çaglar Kilcioglu
B. U. Toreyin
N. K. Üre
29
6
0
26 Feb 2021
Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation
Kenneth Borup
L. Andersen
25
14
0
25 Feb 2021
AlphaNet: Improved Training of Supernets with Alpha-Divergence
Dilin Wang
Chengyue Gong
Meng Li
Qiang Liu
Vikas Chandra
155
44
0
16 Feb 2021
Semantically-Conditioned Negative Samples for Efficient Contrastive Learning
J. Ó. Neill
Danushka Bollegala
33
6
0
12 Feb 2021
Learning Student-Friendly Teacher Networks for Knowledge Distillation
D. Park
Moonsu Cha
C. Jeong
Daesin Kim
Bohyung Han
121
101
0
12 Feb 2021
Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching
Mingi Ji
Byeongho Heo
Sungrae Park
65
143
0
05 Feb 2021
Rethinking Soft Labels for Knowledge Distillation: A Bias-Variance Tradeoff Perspective
Helong Zhou
Liangchen Song
Jiajie Chen
Ye Zhou
Guoli Wang
Junsong Yuan
Qian Zhang
19
170
0
01 Feb 2021
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
Yige Li
Lingjuan Lyu
Nodens Koren
X. Lyu
Bo-wen Li
Xingjun Ma
AAML
FedML
13
428
0
15 Jan 2021
SEED: Self-supervised Distillation For Visual Representation
Zhiyuan Fang
Jianfeng Wang
Lijuan Wang
Lei Zhang
Yezhou Yang
Zicheng Liu
SSL
245
190
0
12 Jan 2021
Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup
Guodong Xu
Ziwei Liu
Chen Change Loy
UQCV
21
39
0
17 Dec 2020
ISD: Self-Supervised Learning by Iterative Similarity Distillation
Ajinkya Tejankar
Soroush Abbasi Koohpayegani
Vipin Pillai
Paolo Favaro
Hamed Pirsiavash
SSL
27
44
0
16 Dec 2020
Wasserstein Contrastive Representation Distillation
Liqun Chen
Dong Wang
Zhe Gan
Jingjing Liu
Ricardo Henao
Lawrence Carin
20
93
0
15 Dec 2020
Model Compression Using Optimal Transport
Suhas Lohit
Michael J. Jones
26
8
0
07 Dec 2020
Cross-Layer Distillation with Semantic Calibration
Defang Chen
Jian-Ping Mei
Yuan Zhang
Can Wang
Yan Feng
Chun-Yen Chen
FedML
45
287
0
06 Dec 2020
Going Beyond Classification Accuracy Metrics in Model Compression
Vinu Joseph
Shoaib Ahmed Siddiqui
Aditya Bhaskara
Ganesh Gopalakrishnan
Saurav Muralidharan
M. Garland
Sheraz Ahmed
Andreas Dengel
45
17
0
03 Dec 2020
Regularization via Adaptive Pairwise Label Smoothing
Hongyu Guo
26
0
0
02 Dec 2020
Multi-level Knowledge Distillation via Knowledge Alignment and Correlation
Fei Ding
Yin Yang
Hongxin Hu
V. Krovi
Feng Luo
22
4
0
01 Dec 2020
torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation
Yoshitomo Matsubara
11
25
0
25 Nov 2020
On Self-Distilling Graph Neural Network
Y. Chen
Yatao Bian
Xi Xiao
Yu Rong
Tingyang Xu
Junzhou Huang
FedML
19
48
0
04 Nov 2020
Attribution Preservation in Network Compression for Reliable Network Interpretation
Geondo Park
J. Yang
Sung Ju Hwang
Eunho Yang
4
5
0
28 Oct 2020
CompRess: Self-Supervised Learning by Compressing Representations
Soroush Abbasi Koohpayegani
Ajinkya Tejankar
Hamed Pirsiavash
SSL
23
89
0
28 Oct 2020
Reducing the Teacher-Student Gap via Spherical Knowledge Distillation
Jia Guo
Minghao Chen
Yao Hu
Chen Zhu
Xiaofei He
Deng Cai
23
6
0
15 Oct 2020
Locally Linear Region Knowledge Distillation
Xiang Deng
Zhongfei Zhang
25
0
0
09 Oct 2020
Improved Knowledge Distillation via Full Kernel Matrix Transfer
Qi Qian
Hao Li
Juhua Hu
6
7
0
30 Sep 2020
Unsupervised Transfer Learning for Spatiotemporal Predictive Networks
Zhiyu Yao
Yunbo Wang
Mingsheng Long
Jianmin Wang
AI4TS
25
18
0
24 Sep 2020
Densely Guided Knowledge Distillation using Multiple Teacher Assistants
Wonchul Son
Jaemin Na
Junyong Choi
Wonjun Hwang
25
111
0
18 Sep 2020
S2SD: Simultaneous Similarity-based Self-Distillation for Deep Metric Learning
Karsten Roth
Timo Milbich
Bjorn Ommer
Joseph Paul Cohen
Marzyeh Ghassemi
FedML
28
17
0
17 Sep 2020
Collaborative Group Learning
Shaoxiong Feng
Hongshen Chen
Xuancheng Ren
Zhuoye Ding
Kan Li
Xu Sun
10
7
0
16 Sep 2020
Noisy Self-Knowledge Distillation for Text Summarization
Yang Liu
S. Shen
Mirella Lapata
33
44
0
15 Sep 2020
SSKD: Self-Supervised Knowledge Distillation for Cross Domain Adaptive Person Re-Identification
Junhui Yin
Jiayan Qiu
Siqing Zhang
Zhanyu Ma
Jun Guo
16
5
0
13 Sep 2020
MED-TEX: Transferring and Explaining Knowledge with Less Data from Pretrained Medical Imaging Models
Thanh Nguyen-Duc
He Zhao
Jianfei Cai
Dinh Q. Phung
VLM
MedIm
25
4
0
06 Aug 2020
Prime-Aware Adaptive Distillation
Youcai Zhang
Zhonghao Lan
Yuchen Dai
Fangao Zeng
Yan Bai
Jie Chang
Yichen Wei
18
40
0
04 Aug 2020
Weakly Supervised 3D Object Detection from Point Clouds
Zengyi Qin
Jinglu Wang
Yan Lu
3DPC
77
62
0
28 Jul 2020
Dynamic Knowledge Distillation for Black-box Hypothesis Transfer Learning
Yiqin Yu
Xu Min
Shiwan Zhao
Jing Mei
Fei Wang
Dongsheng Li
Kenney Ng
Shaochun Li
14
2
0
24 Jul 2020
Multi-label Contrastive Predictive Coding
Jiaming Song
Stefano Ermon
SSL
VLM
19
49
0
20 Jul 2020
Learning with Privileged Information for Efficient Image Super-Resolution
Wonkyung Lee
Junghyup Lee
Dohyung Kim
Bumsub Ham
33
134
0
15 Jul 2020
Representation Transfer by Optimal Transport
Xuhong Li
Yves Grandvalet
Rémi Flamary
Nicolas Courty
Dejing Dou
OT
36
8
0
13 Jul 2020
Interactive Knowledge Distillation
Shipeng Fu
Zhen Li
Jun Xu
Ming-Ming Cheng
Gwanggil Jeon
Xiaomin Yang
6
6
0
03 Jul 2020
On the Demystification of Knowledge Distillation: A Residual Network Perspective
N. Jha
Rajat Saini
Sparsh Mittal
18
4
0
30 Jun 2020
CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information
Pengyu Cheng
Weituo Hao
Shuyang Dai
Jiachang Liu
Zhe Gan
Lawrence Carin
VLM
28
340
0
22 Jun 2020
Multi-fidelity Neural Architecture Search with Knowledge Distillation
I. Trofimov
Nikita Klyuchnikov
Mikhail Salnikov
Alexander N. Filippov
Evgeny Burnaev
32
15
0
15 Jun 2020
Ensemble Distillation for Robust Model Fusion in Federated Learning
Tao R. Lin
Lingjing Kong
Sebastian U. Stich
Martin Jaggi
FedML
19
1,015
0
12 Jun 2020
Knowledge Distillation Meets Self-Supervision
Guodong Xu
Ziwei Liu
Xiaoxiao Li
Chen Change Loy
FedML
37
280
0
12 Jun 2020
Mutual Information Based Knowledge Transfer Under State-Action Dimension Mismatch
Michael Wan
Tanmay Gangwani
Jian-wei Peng
20
19
0
12 Jun 2020
Adjoined Networks: A Training Paradigm with Applications to Network Compression
Utkarsh Nath
Shrinu Kushagra
Yingzhen Yang
24
2
0
10 Jun 2020
Knowledge Distillation: A Survey
Jianping Gou
B. Yu
Stephen J. Maybank
Dacheng Tao
VLM
19
2,851
0
09 Jun 2020
ResKD: Residual-Guided Knowledge Distillation
Xuewei Li
Songyuan Li
Bourahla Omar
Fei Wu
Xi Li
21
47
0
08 Jun 2020