Cross-Layer Distillation with Semantic Calibration

6 December 2020 · arXiv:2012.03236
Defang Chen, Jian-Ping Mei, Yuan Zhang, Can Wang, Yan Feng, Chun-Yen Chen
Community: FedML

Papers citing "Cross-Layer Distillation with Semantic Calibration"

Showing 50 of 118 citing papers:

Improving Knowledge Distillation with Teacher's Explanation
  S. Chowdhury, Ben Liang, A. Tizghadam, Ilijc Albanese (04 Oct 2023) [FAtt]

Generalizable Heterogeneous Federated Cross-Correlation and Instance Similarity Learning
  Wenke Huang, J. J. Valero-Mas, Dasaem Jeong, Bo Du (28 Sep 2023) [FedML]

Heterogeneous Generative Knowledge Distillation with Masked Image Modeling
  Ziming Wang, Shumin Han, Xiaodi Wang, Jing Hao, Xianbin Cao, Baochang Zhang (18 Sep 2023) [VLM]

DMKD: Improving Feature-based Knowledge Distillation for Object Detection Via Dual Masking Augmentation
  Guangqi Yang, Yin Tang, Zhijian Wu, Jun Yu Li, Jianhua Xu, Xili Wan (06 Sep 2023)

Representation Disparity-aware Distillation for 3D Object Detection
  Yanjing Li, Sheng Xu, Mingbao Lin, Jihao Yin, Baochang Zhang, Xianbin Cao (20 Aug 2023)

MixBCT: Towards Self-Adapting Backward-Compatible Training
  Yuefeng Liang, Yufeng Zhang, Shiliang Zhang, Yaowei Wang, Shengze Xiao, KenLi Li, Xiaoyu Wang (14 Aug 2023)

Customizing Synthetic Data for Data-Free Student Learning
  Shiya Luo, Defang Chen, Can Wang (10 Jul 2023)

KDSTM: Neural Semi-supervised Topic Modeling with Knowledge Distillation
  Weijie Xu, Xiaoyu Jiang, Jay Desai, Bin Han, Fuqin Yan, Francis Iannacci (04 Jul 2023) [BDL]

Common Knowledge Learning for Generating Transferable Adversarial Examples
  Rui Yang, Yuanfang Guo, Junfu Wang, Jiantao Zhou, Yun-an Wang (01 Jul 2023) [AAML]

A Dimensional Structure based Knowledge Distillation Method for Cross-Modal Learning
  Lingyu Si, Hongwei Dong, Wenwen Qiang, J. Yu, Wen-jie Zhai, Changwen Zheng, Fanjiang Xu, Fuchun Sun (28 Jun 2023)

Cross Architecture Distillation for Face Recognition
  Weisong Zhao, Xiangyu Zhu, Zhixiang He, Xiaoyu Zhang, Zhen Lei (26 Jun 2023) [CVBM]

Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
  Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu (19 Jun 2023) [VLM, OffRL]

Coaching a Teachable Student
  Jimuyang Zhang, Zanming Huang, Eshed Ohn-Bar (16 Jun 2023)

Adaptive Multi-Teacher Knowledge Distillation with Meta-Learning
  Hailin Zhang, Defang Chen, Can Wang (11 Jun 2023)

ABC-KD: Attention-Based-Compression Knowledge Distillation for Deep Learning-Based Noise Suppression
  Yixin Wan, Yuan-yuan Zhou, Xiulian Peng, Kai-Wei Chang, Yan Lu (26 May 2023)

NORM: Knowledge Distillation via N-to-One Representation Matching
  Xiaolong Liu, Lujun Li, Chao Li, Anbang Yao (23 May 2023)

Robust Saliency-Aware Distillation for Few-shot Fine-grained Visual Recognition
  Haiqi Liu, C. L. P. Chen, Xinrong Gong, Tong Zhang (12 May 2023)

Visual Tuning
  Bruce X. B. Yu, Jianlong Chang, Haixin Wang, Lin Liu, Shijie Wang, ..., Lingxi Xie, Haojie Li, Zhouchen Lin, Qi Tian, Chang Wen Chen (10 May 2023) [VLM]

Leveraging Synthetic Targets for Machine Translation
  Sarthak Mittal, Oleksii Hrinchuk, Oleksii Kuchaiev (07 May 2023)

Function-Consistent Feature Distillation
  Dongyang Liu, Meina Kan, Shiguang Shan, Xilin Chen (24 Apr 2023)

Knowledge Distillation Under Ideal Joint Classifier Assumption
  Huayu Li, Xiwen Chen, G. Ditzler, Janet Roveda, Ao Li (19 Apr 2023)

Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning
  Kaiyou Song, Jin Xie, Shanyi Zhang, Zimeng Luo (13 Apr 2023)

CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
  G. Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, Bernard Ghanem (31 Mar 2023) [SyDa, ALM]

DAMO-StreamNet: Optimizing Streaming Perception in Autonomous Driving
  Ju He, Zhi-Qi Cheng, Chenyang Li, Wangmeng Xiang, Binghui Chen, Bin Luo, Yifeng Geng, Xuansong Xie (30 Mar 2023) [AI4CE]

Distill n' Explain: explaining graph neural networks using simple surrogates
  Tamara A. Pereira, Erik Nasciment, Lucas Resck, Diego Mesquita, Amauri Souza (17 Mar 2023)

Continuous sign language recognition based on cross-resolution knowledge distillation
  Qidan Zhu, Jing Li, Fei Yuan, Quan Gan (13 Mar 2023) [SLR]

Digital Twin-Assisted Knowledge Distillation Framework for Heterogeneous Federated Learning
  Xiucheng Wang, Nan Cheng, Longfei Ma, Ruijin Sun, Rong Chai, Ning Lu (10 Mar 2023) [FedML]

Audio Representation Learning by Distilling Video as Privileged Information
  Amirhossein Hajavi, Ali Etemad (06 Feb 2023)

Knowledge Distillation ≈ Label Smoothing: Fact or Fallacy?
  Md Arafat Sultan (30 Jan 2023)

Prototype-guided Cross-task Knowledge Distillation for Large-scale Models
  Deng Li, Aming Wu, Yahong Han, Qingwen Tian (26 Dec 2022) [VLM]

3D Point Cloud Pre-training with Knowledge Distillation from 2D Images
  Yuan Yao, Yuanhan Zhang, Zhen-fei Yin, Jiebo Luo, Wanli Ouyang, Xiaoshui Huang (17 Dec 2022) [3DPC]

Enhancing Low-Density EEG-Based Brain-Computer Interfaces with Similarity-Keeping Knowledge Distillation
  Xin Huang, Sung-Yu Chen, Chun-Shu Wei (06 Dec 2022)

Leveraging Different Learning Styles for Improved Knowledge Distillation in Biomedical Imaging
  Usma Niyaz, A. Sambyal, Deepti R. Bathula (06 Dec 2022)

Class-aware Information for Logit-based Knowledge Distillation
  Shuoxi Zhang, Hanpeng Liu, J. Hopcroft, Kun He (27 Nov 2022)

Accelerating Diffusion Sampling with Classifier-based Feature Distillation
  Wujie Sun, Defang Chen, Can Wang, Deshi Ye, Yan Feng, Chun-Yen Chen (22 Nov 2022)

Improved Feature Distillation via Projector Ensemble
  Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Zi Huang (27 Oct 2022)

Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision
  Jiongyu Guo, Defang Chen, Can Wang (25 Oct 2022)

Linkless Link Prediction via Relational Distillation
  Zhichun Guo, William Shiao, Shichang Zhang, Yozen Liu, Nitesh V. Chawla, Neil Shah, Tong Zhao (11 Oct 2022)

MLink: Linking Black-Box Models from Multiple Domains for Collaborative Inference
  Mu Yuan, Lan Zhang, Zimu Zheng, Yi-Nan Zhang, Xiang-Yang Li (28 Sep 2022)

Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition
  Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang (23 Jul 2022)

Confidence-aware Self-Semantic Distillation on Knowledge Graph Embedding
  Yichen Liu, C. Wang, Defang Chen, Zhehui Zhou, Yan Feng, Chun-Yen Chen (07 Jun 2022)

CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing
  Zhiwei Hao, Yong Luo, Zhi Wang, Han Hu, J. An (24 May 2022)

Knowledge Distillation via the Target-aware Transformer
  Sihao Lin, Hongwei Xie, Bing Wang, Kaicheng Yu, Xiaojun Chang, Xiaodan Liang, G. Wang (22 May 2022) [ViT]

Alignahead: Online Cross-Layer Knowledge Extraction on Graph Neural Networks
  Jiongyu Guo, Defang Chen, Can Wang (05 May 2022) [FedML]

Attention-based Knowledge Distillation in Multi-attention Tasks: The Impact of a DCT-driven Loss
  Alejandro López-Cifuentes, Marcos Escudero-Viñolo, Jesús Bescós, Juan C. Sanmiguel (04 May 2022)

LRH-Net: A Multi-Level Knowledge Distillation Approach for Low-Resource Heart Network
  Ekansh Chauhan, Swathi Guptha, Likith Reddy, R. Bapi (11 Apr 2022)

Enabling All In-Edge Deep Learning: A Literature Review
  Praveen Joshi, Mohammed Hasanuzzaman, Chandra Thapa, Haithem Afli, T. Scully (07 Apr 2022)

Knowledge Distillation with the Reused Teacher Classifier
  Defang Chen, Jianhan Mei, Hailin Zhang, C. Wang, Yan Feng, Chun-Yen Chen (26 Mar 2022)

Learning Affordance Grounding from Exocentric Images
  Hongcheng Luo, Wei Zhai, Jing Zhang, Yang Cao, Dacheng Tao (18 Mar 2022)

Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability
  Ruifei He, Shuyang Sun, Jihan Yang, Song Bai, Xiaojuan Qi (10 Mar 2022)