Variational Information Distillation for Knowledge Transfer
arXiv:1904.05835 · 11 April 2019
Sungsoo Ahn, S. Hu, Andreas C. Damianou, Neil D. Lawrence, Zhenwen Dai
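
For reference, the cited paper frames knowledge distillation as maximizing the mutual information I(t; s) between teacher activations t and student activations s, replacing the intractable conditional entropy with a variational lower bound. A minimal restatement of that bound, with q(t | s) the variational approximation the paper instantiates as a Gaussian with learned per-dimension variance:

\[
I(t; s) = H(t) - H(t \mid s) \;\ge\; H(t) + \mathbb{E}_{t,s}\!\left[\log q(t \mid s)\right],
\qquad
-\log q(t \mid s) = \sum_{k}\left(\log \sigma_k + \frac{\left(t_k - \mu_k(s)\right)^2}{2\sigma_k^2}\right) + \text{const.}
\]

Since H(t) does not depend on the student, training minimizes the expected negative log-likelihood term alongside the task loss.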

Papers citing "Variational Information Distillation for Knowledge Transfer"

Showing 50 of 321 citing papers (title, authors, topic tags where shown, date):

FiGKD: Fine-Grained Knowledge Distillation via High-Frequency Detail Transfer
Seonghak Kim
17 May 2025

InfoPO: On Mutual Information Maximization for Large Language Model Alignment
Teng Xiao, Zhen Ge, Sujay Sanghavi, Tian Wang, Julian Katz-Samuels, Marc Versage, Qingjun Cui, Trishul Chilimbi
13 May 2025

Collaborative Multi-LoRA Experts with Achievement-based Multi-Tasks Loss for Unified Multimodal Information Extraction
Li Yuan, Yi Cai, Xudong Shen, Qing Li, Qingbao Huang, Zikun Deng, Tao Wang
Tags: MoMe, OffRL, MoE
08 May 2025

Precision Neural Network Quantization via Learnable Adaptive Modules
Wenqiang Zhou, Zhendong Yu, Xianglong Liu, Jiaming Yang, Rong Xiao, Tao Wang, Chenwei Tang, Jiancheng Lv
Tags: MQ
24 Apr 2025

Sparse Logit Sampling: Accelerating Knowledge Distillation in LLMs
Anshumann, Mohd Abbas Zaidi, Akhil Kedia, Jinwoo Ahn, Taehwak Kwon, Kangwook Lee, Haejun Lee, Joohyung Lee
Tags: FedML
21 Mar 2025

Cross-Modal and Uncertainty-Aware Agglomeration for Open-Vocabulary 3D Scene Understanding
Jinlong Li, Cristiano Saltori, Fabio Poiesi, N. Sebe
20 Mar 2025

VRM: Knowledge Distillation via Virtual Relation Matching
W. Zhang, Fei Xie, Weidong Cai, Chao Ma
28 Feb 2025

Multi-Level Decoupled Relational Distillation for Heterogeneous Architectures
Yaoxin Yang, Peng Ye, Weihao Lin, Kangcong Li, Yan Wen, Jia Hao, Tao Chen
10 Feb 2025

Contrastive Representation Distillation via Multi-Scale Feature Decoupling
Cuipeng Wang, Tieyuan Chen, Haipeng Wang
09 Feb 2025

Variational Bayesian Adaptive Learning of Deep Latent Variables for Acoustic Knowledge Transfer
Hu Hu, Sabato Marco Siniscalchi, Chao-Han Huck Yang, Chin-Hui Lee
28 Jan 2025

Knowledge Distillation with Adapted Weight
Sirong Wu, Xi Luo, Junjie Liu, Yuhui Deng
06 Jan 2025

Cross-View Consistency Regularisation for Knowledge Distillation
W. Zhang, Dongnan Liu, Weidong Cai, Chao Ma
21 Dec 2024

On Distilling the Displacement Knowledge for Few-Shot Class-Incremental Learning
Pengfei Fang, Yongchun Qin, H. Xue
Tags: CLL
15 Dec 2024

Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation
Jiaming Lv, Haoyuan Yang, P. Li
11 Dec 2024

Quantifying Knowledge Distillation Using Partial Information Decomposition
Pasan Dissanayake, Faisal Hamman, Barproda Halder, Ilia Sucholutsky, Qiuyi Zhang, Sanghamitra Dutta
12 Nov 2024

Toward Robust Incomplete Multimodal Sentiment Analysis via Hierarchical Representation Learning
Mingxing Li, Dingkang Yang, Y. Liu, Shunli Wang, Jiawei Chen, ..., Xiaolu Hou, Mingyang Sun, Ziyun Qian, Dongliang Kou, Li Zhang
05 Nov 2024

Decoupling Dark Knowledge via Block-wise Logit Distillation for Feature-level Alignment
Chengting Yu, Fengzhao Zhang, Ruizhe Chen, Zuozhu Liu, Shurun Tan, Er-ping Li, Aili Wang
03 Nov 2024

AttriPrompter: Auto-Prompting with Attribute Semantics for Zero-shot Nuclei Detection via Visual-Language Pre-trained Models
Yongjian Wu, Yang Zhou, Jiya Saiyin, Bingzheng Wei, M. Lai, Jianzhong Shou, Yan Xu
Tags: VLM, MedIm
22 Oct 2024

Pre-training Distillation for Large Language Models: A Design Space Exploration
Hao Peng, Xin Lv, Yushi Bai, Zijun Yao, J. Zhang, Lei Hou, Juanzi Li
21 Oct 2024

Preview-based Category Contrastive Learning for Knowledge Distillation
Muhe Ding, Jianlong Wu, Xue Dong, Xiaojie Li, Pengda Qin, Tian Gan, Liqiang Nie
Tags: VLM
18 Oct 2024

Swiss Army Knife: Synergizing Biases in Knowledge from Vision Foundation Models for Multi-Task Learning
Yuxiang Lu, Shengcao Cao, Yu-xiong Wang
18 Oct 2024

Distilling Invariant Representations with Dual Augmentation
Nikolaos Giakoumoglou, Tania Stathaki
12 Oct 2024

EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models
Rui Zhao, Hangjie Yuan, Yujie Wei, Shiwei Zhang, Yuchao Gu, ..., Xiang Wang, Zhangjie Wu, Junhao Zhang, Yingya Zhang, Mike Zheng Shou
Tags: DiffM, VLM
09 Oct 2024

PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation
Mike Ranzinger, Jon Barker, Greg Heinrich, Pavlo Molchanov, Bryan Catanzaro, Andrew Tao
02 Oct 2024

Linear Projections of Teacher Embeddings for Few-Class Distillation
Noel Loo, Fotis Iliopoulos, Wei Hu, Erik Vee
30 Sep 2024

Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies
Shalini Sarode, Muhammad Saif Ullah Khan, Tahira Shehzadi, Didier Stricker, Muhammad Zeshan Afzal
30 Sep 2024

Simple Unsupervised Knowledge Distillation With Space Similarity
Aditya Singh, Haohan Wang
20 Sep 2024

Frequency-Guided Masking for Enhanced Vision Self-Supervised Learning
Amin Karimi Monsefi, Mengxi Zhou, Nastaran Karimi Monsefi, Ser-Nam Lim, Wei-Lun Chao, R. Ramnath
16 Sep 2024

Low-Resolution Object Recognition with Cross-Resolution Relational Contrastive Distillation
Kangkai Zhang, Shiming Ge, Ruixin Shi, Dan Zeng
04 Sep 2024

UNIC: Universal Classification Models via Multi-teacher Distillation
Mert Bulent Sariyildiz, Philippe Weinzaepfel, Thomas Lucas, Diane Larlus, Yannis Kalantidis
09 Aug 2024

Learning at a Glance: Towards Interpretable Data-limited Continual Semantic Segmentation via Semantic-Invariance Modelling
Bo Yuan, Danpei Zhao, Z. Shi
Tags: VLM, CLL
22 Jul 2024

Relational Representation Distillation
Nikolaos Giakoumoglou, Tania Stathaki
16 Jul 2024

Reprogramming Distillation for Medical Foundation Models
Yuhang Zhou, Siyuan Du, Haolin Li, Jiangchao Yao, Ya Zhang, Yanfeng Wang
09 Jul 2024

Leveraging Topological Guidance for Improved Knowledge Distillation
Eun Som Jeon, Rahul Khurana, Aishani Pathak, Pavan Turaga
07 Jul 2024

Understanding the Gains from Repeated Self-Distillation
Divyansh Pareek, Simon S. Du, Sewoong Oh
05 Jul 2024

Instance Temperature Knowledge Distillation
Zhengbo Zhang, Yuxi Zhou, Jia Gong, Jun Liu, Zhigang Tu
27 Jun 2024

Make Graph Neural Networks Great Again: A Generic Integration Paradigm of Topology-Free Patterns for Traffic Speed Prediction
Yicheng Zhou, Pengfei Wang, Hao Dong, Denghui Zhang, Dingqi Yang, Yanjie Fu, Pengyang Wang
Tags: AI4TS, AI4CE, GNN
24 Jun 2024

NaviSplit: Dynamic Multi-Branch Split DNNs for Efficient Distributed Autonomous Navigation
Timothy K Johnsen, Ian Harshbarger, Zixia Xia, Marco Levorato
18 Jun 2024

Adaptive Teaching with Shared Classifier for Knowledge Distillation
Jaeyeon Jang, Young-Ik Kim, Jisu Lim, Hyeonseong Lee
12 Jun 2024

DistilDoc: Knowledge Distillation for Visually-Rich Document Applications
Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas
12 Jun 2024

Transferring Knowledge from Large Foundation Models to Small Downstream Models
Shikai Qiu, Boran Han, Danielle C. Maddix, Shuai Zhang, Yuyang Wang, Andrew Gordon Wilson
11 Jun 2024

ReDistill: Residual Encoded Distillation for Peak Memory Reduction of CNNs
Fang Chen, Gourav Datta, Mujahid Al Rafi, Hyeran Jeon, Meng Tang
06 Jun 2024

Tiny models from tiny data: Textual and null-text inversion for few-shot distillation
Erik Landolsi, Fredrik Kahl
Tags: DiffM
05 Jun 2024

Estimating Human Poses Across Datasets: A Unified Skeleton and Multi-Teacher Distillation Approach
Muhammad Gul Zain Ali Khan, Dhavalkumar Limbachiya, Didier Stricker, Muhammad Zeshan Afzal
Tags: 3DH
30 May 2024

Estimating Depth of Monocular Panoramic Image with Teacher-Student Model Fusing Equirectangular and Spherical Representations
Jingguo Liu, Yijun Xu, Shigang Li, Jianfeng Li
Tags: MDE
27 May 2024

From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks
Xue Geng, Zhe Wang, Chunyun Chen, Qing Xu, Kaixin Xu, ..., Zhenghua Chen, M. Aly, Jie Lin, Min-man Wu, Xiaoli Li
09 May 2024

Low-Rank Knowledge Decomposition for Medical Foundation Models
Yuhang Zhou, Haolin Li, Siyuan Du, Jiangchao Yao, Ya Zhang, Yanfeng Wang
26 Apr 2024

Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities
Mingcheng Li, Dingkang Yang, Xiao Zhao, Shuai Wang, Yan Wang, Kun Yang, Mingyang Sun, Dongliang Kou, Ziyun Qian, Lihua Zhang
25 Apr 2024

CNN2GNN: How to Bridge CNN with GNN
Ziheng Jiao, Hongyuan Zhang, Xuelong Li
23 Apr 2024

On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models
Sean Farhat, Deming Chen
04 Apr 2024