Knowledge Distillation via the Target-aware Transformer
Sihao Lin, Hongwei Xie, Bing Wang, Kaicheng Yu, Xiaojun Chang, Xiaodan Liang, G. Wang
arXiv: 2205.10793 (v2, latest) · 22 May 2022
Tags: ViT
Links: arXiv abstract · PDF · HTML · GitHub (25★)
Papers citing "Knowledge Distillation via the Target-aware Transformer" (7 of 57 shown)
Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision
Jiongyu Guo, Defang Chen, Can Wang · 25 Oct 2022

ViTKD: Practical Guidelines for ViT feature knowledge distillation
Zhendong Yang, Zhe Li, Ailing Zeng, Zexian Li, Chun Yuan, Yu Li · 06 Sep 2022

In the Eye of Transformer: Global-Local Correlation for Egocentric Gaze Estimation
Bolin Lai, Miao Liu, Fiona Ryan, James M. Rehg · ViT · 08 Aug 2022

Cross-modal Clinical Graph Transformer for Ophthalmic Report Generation
Mingjie Li, Wenjia Cai, Karin Verspoor, Shirui Pan, Xiaodan Liang, Xiaojun Chang · MedIm · 04 Jun 2022

TransKD: Transformer Knowledge Distillation for Efficient Semantic Segmentation
R. Liu, Kailun Yang, Alina Roitberg, Jiaming Zhang, Kunyu Peng, Huayao Liu, Yaonan Wang, Rainer Stiefelhagen · ViT · 27 Feb 2022

Knowledge Distillation with Deep Supervision
Shiya Luo, Defang Chen, Can Wang · 16 Feb 2022

Contrastive Representation Distillation
Yonglong Tian, Dilip Krishnan, Phillip Isola · 23 Oct 2019