Relational Knowledge Distillation (arXiv:1904.05068)
10 April 2019
Wonpyo Park, Dongju Kim, Yan Lu, Minsu Cho
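
The paper transfers knowledge through relations among training examples (pairwise distances and triplet angles of embeddings) rather than through individual teacher outputs. Below is a minimal PyTorch sketch of the distance-wise loss (RKD-D), assuming (batch, dim) embedding tensors; function and variable names are illustrative, not taken from the authors' code:

```python
import torch
import torch.nn.functional as F

def pdist(e: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    # Pairwise Euclidean distances for a (batch, dim) embedding matrix.
    sq = e.pow(2).sum(dim=1)
    d = (sq.unsqueeze(1) + sq.unsqueeze(0) - 2.0 * (e @ e.t())).clamp(min=eps).sqrt()
    d = d.clone()
    d.fill_diagonal_(0)  # zero self-distances; eps keeps sqrt's gradient finite
    return d

def rkd_distance_loss(student_emb: torch.Tensor, teacher_emb: torch.Tensor) -> torch.Tensor:
    # RKD-D: penalize differences between the relative distance structures of
    # student and teacher embeddings, each normalized by its mean pair distance.
    with torch.no_grad():
        t_d = pdist(teacher_emb)
        t_d = t_d / t_d[t_d > 0].mean()
    s_d = pdist(student_emb)
    s_d = s_d / s_d[s_d > 0].mean()
    return F.smooth_l1_loss(s_d, t_d)  # Huber loss, as used in the paper
```

The mean-distance normalization makes the loss invariant to the differing scales of the teacher and student embedding spaces, which is what lets a small student match a large teacher's relational structure.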
Papers citing "Relational Knowledge Distillation" (50 of 59 shown). Each entry gives the title, authors, topic tags where present, publication date, and citation count:
KDH-MLTC: Knowledge Distillation for Healthcare Multi-Label Text Classification. Hajar Sakai, Sarah Lam. [VLM] 12 May 2025. 0 citations.
Optimizing LLMs for Resource-Constrained Environments: A Survey of Model Compression Techniques. Sanjay Surendranath Girija, Shashank Kapoor, Lakshit Arora, Dipen Pradhan, Aman Raj, Ankit Shetgaonkar. 05 May 2025. 0 citations.
Distilling Stereo Networks for Performant and Efficient Leaner Networks. Rafia Rahim, Samuel Woerz, A. Zell. 24 Mar 2025. 0 citations.
Semantic-Supervised Spatial-Temporal Fusion for LiDAR-based 3D Object Detection. Chaoqun Wang, Xiaobin Hong, Wenzhong Li, Ruimao Zhang. [3DPC] 13 Mar 2025. 0 citations.
VRM: Knowledge Distillation via Virtual Relation Matching. W. Zhang, Fei Xie, Weidong Cai, Chao Ma. 28 Feb 2025. 0 citations.
I2CKD: Intra- and Inter-Class Knowledge Distillation for Semantic Segmentation. Ayoub Karine, Thibault Napoléon, M. Jridi. [VLM] 24 Feb 2025. 0 citations.
Leave No One Behind: Enhancing Diversity While Maintaining Accuracy in Social Recommendation. Lei Li, Xiao Zhou. 17 Feb 2025. 0 citations.
Variational Bayesian Adaptive Learning of Deep Latent Variables for Acoustic Knowledge Transfer. Hu Hu, Sabato Marco Siniscalchi, Chao-Han Huck Yang, Chin-Hui Lee. 28 Jan 2025. 0 citations.
Exploring Feature-based Knowledge Distillation for Recommender System: A Frequency Perspective. Zhangchi Zhu, Wei Zhang. 16 Nov 2024. 0 citations.
Dual-Head Knowledge Distillation: Enhancing Logits Utilization with an Auxiliary Head. Penghui Yang, Chen-Chen Zong, Sheng-Jun Huang, Lei Feng, Bo An. 13 Nov 2024. 1 citation.
Scale-Aware Recognition in Satellite Images under Resource Constraints. Shreelekha Revankar, Cheng Perng Phoo, Utkarsh Mall, Bharath Hariharan, Kavita Bala. 31 Oct 2024. 0 citations.
SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models. Jahyun Koo, Yerin Hwang, Yongil Kim, Taegwan Kang, Hyunkyung Bae, Kyomin Jung. 25 Oct 2024. 0 citations.
MiniPLM: Knowledge Distillation for Pre-Training Language Models. Yuxian Gu, Hao Zhou, Fandong Meng, Jie Zhou, Minlie Huang. 22 Oct 2024. 7 citations.
Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks. S. Joshi, Jiayi Ni, Baharan Mirzasoleiman. [DD] 03 Oct 2024. 2 citations.
Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies. Shalini Sarode, Muhammad Saif Ullah Khan, Tahira Shehzadi, Didier Stricker, Muhammad Zeshan Afzal. 30 Sep 2024. 0 citations.
DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture. Qianlong Xiang, Miao Zhang, Yuzhang Shang, Jianlong Wu, Yan Yan, Liqiang Nie. [DiffM] 05 Sep 2024. 10 citations.
Collaborative Learning for Enhanced Unsupervised Domain Adaptation. Minhee Cho, Hyesong Choi, Hayeon Jo, Dongbo Min. 04 Sep 2024. 1 citation.
Relational Representation Distillation. Nikolaos Giakoumoglou, Tania Stathaki. 16 Jul 2024. 0 citations.
DistilDoc: Knowledge Distillation for Visually-Rich Document Applications. Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas. 12 Jun 2024. 2 citations.
ReDistill: Residual Encoded Distillation for Peak Memory Reduction of CNNs. Fang Chen, Gourav Datta, Mujahid Al Rafi, Hyeran Jeon, Meng Tang. 06 Jun 2024. 1 citation.
CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective. Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu. [VLM] 22 Apr 2024. 1 citation.
TSCM: A Teacher-Student Model for Vision Place Recognition Using Cross-Metric Knowledge Distillation. Yehui Shen, Mingmin Liu, Huimin Lu, Xieyuanli Chen. 02 Apr 2024. 1 citation.
LIX: Implicitly Infusing Spatial Geometric Prior Knowledge into Visual Semantic Segmentation for Autonomous Driving. Sicen Guo, Zhiyuan Wu, Qijun Chen, Ioannis Pitas, Rui Fan. 13 Mar 2024. 1 citation.
Attention-guided Feature Distillation for Semantic Segmentation. Amir M. Mansourian, Arya Jalali, Rozhan Ahmadi, S. Kasaei. 08 Mar 2024. 0 citations.
Maximizing Discrimination Capability of Knowledge Distillation with Energy Function. Seonghak Kim, Gyeongdo Ham, Suin Lee, Donggon Jang, Daeshik Kim. 24 Nov 2023. 4 citations.
Robustness-Reinforced Knowledge Distillation with Correlation Distance and Network Pruning. Seonghak Kim, Gyeongdo Ham, Yucheol Cho, Daeshik Kim. 23 Nov 2023. 4 citations.
Review helps learn better: Temporal Supervised Knowledge Distillation. Dongwei Wang, Zhi Han, Yanmei Wang, Xi’ai Chen, Baichen Liu, Yandong Tang. 03 Jul 2023. 1 citation.
Similarity of Neural Network Models: A Survey of Functional and Representational Measures. Max Klabunde, Tobias Schumacher, M. Strohmaier, Florian Lemmerich. 10 May 2023. 73 citations.
SuperMix: Supervising the Mixing Data Augmentation. Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, Nasser M. Nasrabadi. 10 Mar 2020. 100 citations.
Contrastive Representation Distillation. Yonglong Tian, Dilip Krishnan, Phillip Isola. 23 Oct 2019. 1,054 citations.
Learning to Navigate for Fine-grained Classification. Ze Yang, Tiange Luo, Dong Wang, Zhiqiang Hu, Jun Gao, Liwei Wang. 02 Sep 2018. 447 citations.
Born Again Neural Networks. Tommaso Furlanello, Zachary Chase Lipton, Michael Tschannen, Laurent Itti, Anima Anandkumar. 12 May 2018. 1,034 citations.
Label Refinery: Improving ImageNet Classification through Label Progression. Hessam Bagherinezhad, Maxwell Horton, Mohammad Rastegari, Ali Farhadi. 07 May 2018. 190 citations.
Attention-based Ensemble for Deep Metric Learning. Wonsik Kim, Bhavya Goyal, Kunal Chawla, Jungmin Lee, Keunjoo Kwon. [FedML] 02 Apr 2018. 227 citations.
Model compression via distillation and quantization. A. Polino, Razvan Pascanu, Dan Alistarh. [MQ] 15 Feb 2018. 732 citations.
Deep Metric Learning with BIER: Boosting Independent Embeddings Robustly. M. Opitz, Georg Waltner, Horst Possegger, Horst Bischof. [FedML, OOD] 15 Jan 2018. 166 citations.
Data Distillation: Towards Omni-Supervised Learning. Ilija Radosavovic, Piotr Dollár, Ross B. Girshick, Georgia Gkioxari, Kaiming He. 12 Dec 2017. 419 citations.
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy. Asit K. Mishra, Debbie Marr. [FedML] 15 Nov 2017. 331 citations.
Moonshine: Distilling with Cheap Convolutions. Elliot J. Crowley, Gavia Gray, Amos Storkey. 07 Nov 2017. 121 citations.
Data-Free Knowledge Distillation for Deep Neural Networks. Raphael Gontijo-Lopes, Stefano Fenu, Thad Starner. 19 Oct 2017. 273 citations.
Revisiting knowledge transfer for training object class detectors. J. Uijlings, S. Popov, V. Ferrari. [VLM, ObjD] 21 Aug 2017. 71 citations.
Deep Metric Learning with Angular Loss. Jian Wang, Feng Zhou, Shilei Wen, Xiao-Chang Liu, Yuanqing Lin. [DML] 04 Aug 2017. 506 citations.
DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer. Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang. [FedML] 05 Jul 2017. 225 citations.
Sampling Matters in Deep Embedding Learning. Chao-Yuan Wu, R. Manmatha, Alex Smola, Philipp Krähenbühl. 23 Jun 2017. 924 citations.
Object-Part Attention Model for Fine-grained Image Classification. Yuxin Peng, Xiangteng He, Junjie Zhao. [OCL, VLM] 06 Apr 2017. 338 citations.
Prototypical Networks for Few-shot Learning. Jake C. Snell, Kevin Swersky, R. Zemel. 15 Mar 2017. 8,154 citations.
Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer. Sergey Zagoruyko, N. Komodakis. 12 Dec 2016. 2,586 citations.
Deep Model Compression: Distilling Knowledge from Noisy Teachers. Bharat Bhusan Sau, V. Balasubramanian. 30 Oct 2016. 181 citations.
Matching Networks for One Shot Learning. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra. [VLM] 13 Jun 2016. 7,343 citations.
Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. [MedIm] 10 Dec 2015. 194,510 citations.