Densely Guided Knowledge Distillation using Multiple Teacher Assistants
Wonchul Son, Jaemin Na, Junyong Choi, Wonjun Hwang
arXiv:2009.08825 · 18 September 2020

Papers citing "Densely Guided Knowledge Distillation using Multiple Teacher Assistants"

18 of 18 papers shown

VRM: Knowledge Distillation via Virtual Relation Matching
W. Zhang, Fei Xie, Weidong Cai, Chao Ma · 28 Feb 2025

Rethinking Knowledge in Distillation: An In-context Sample Retrieval Perspective
Jinjing Zhu, Songze Li, Lin Wang · 13 Jan 2025

Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies
Shalini Sarode, Muhammad Saif Ullah Khan, Tahira Shehzadi, Didier Stricker, Muhammad Zeshan Afzal · 30 Sep 2024

MIDAS: Multi-level Intent, Domain, And Slot Knowledge Distillation for Multi-turn NLU
Yan Li, So-Eon Kim, Seong-Bae Park, S. Han · 15 Aug 2024

Direct Preference Knowledge Distillation for Large Language Models
Yixing Li, Yuxian Gu, Li Dong, Dequan Wang, Yu Cheng, Furu Wei · 28 Jun 2024

Understanding the Effects of Projectors in Knowledge Distillation
Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Brano Kusy, Zi Huang · 26 Oct 2023

Accurate Retraining-free Pruning for Pretrained Encoder-based Language Models
Seungcheol Park, Ho-Jin Choi, U. Kang · VLM · 07 Aug 2023

GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model
Shicheng Tan, Weng Lam Tam, Yuanchun Wang, Wenwen Gong, Yang Yang, ..., Jiahao Liu, Jingang Wang, Shuo Zhao, Peng-Zhen Zhang, Jie Tang · ALM, MoE · 11 Jun 2023

Knowledge Diffusion for Distillation
Tao Huang, Yuan Zhang, Mingkai Zheng, Shan You, Fei Wang, Chao Qian, Chang Xu · 25 May 2023

Student-friendly Knowledge Distillation
Mengyang Yuan, Bo Lang, Fengnan Quan · 18 May 2023

Improved Feature Distillation via Projector Ensemble
Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Zi Huang · 27 Oct 2022

Respecting Transfer Gap in Knowledge Distillation
Yulei Niu, Long Chen, Chan Zhou, Hanwang Zhang · 23 Oct 2022

Label driven Knowledge Distillation for Federated Learning with non-IID Data
Minh-Duong Nguyen, Viet Quoc Pham, D. Hoang, Long Tran-Thanh, Diep N. Nguyen, W. Hwang · 29 Sep 2022

Knowledge Distillation from A Stronger Teacher
Tao Huang, Shan You, Fei Wang, Chao Qian, Chang Xu · 21 May 2022

Localization Distillation for Object Detection
Zhaohui Zheng, Rongguang Ye, Ping Wang, Dongwei Ren, Jun Wang, W. Zuo, Ming-Ming Cheng · 12 Apr 2022

Localization Distillation for Dense Object Detection
Zhaohui Zheng, Rongguang Ye, Ping Wang, Dongwei Ren, W. Zuo, Qibin Hou, Ming-Ming Cheng · ObjD · 24 Feb 2021

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong · 12 Jun 2018

SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
Vijay Badrinarayanan, Alex Kendall, R. Cipolla · SSeg · 02 Nov 2015