Tailoring Instructions to Student's Learning Levels Boosts Knowledge Distillation

16 May 2023
Yuxin Ren, Zi-Qi Zhong, Xingjian Shi, Yi Zhu, Chun Yuan, Mu Li

Papers citing "Tailoring Instructions to Student's Learning Levels Boosts Knowledge Distillation"

12 / 12 papers shown
Every Expert Matters: Towards Effective Knowledge Distillation for Mixture-of-Experts Language Models
Gyeongman Kim, Gyouk Chu, Eunho Yang
MoE · 18 Feb 2025

Dynamic Self-Distillation via Previous Mini-batches for Fine-tuning Small Language Models
Y. Fu, Yin Yu, Xiaotian Han, Runchao Li, Xianxuan Long, Haotian Yu, Pan Li
SyDa · 25 Nov 2024

Over-parameterized Student Model via Tensor Decomposition Boosted Knowledge Distillation
Yu-Liang Zhan, Zhong-Yi Lu, Hao Sun, Ze-Feng Gao
10 Nov 2024

Reliable Model Watermarking: Defending Against Theft without Compromising on Evasion
Markus Frey, Sichu Liang, Wentao Hu, Matthias Nau, Ju Jia, Shilin Wang
AAML · 21 Apr 2024

PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning
Gyeongman Kim, Doohyuk Jang, Eunho Yang
VLM · 20 Feb 2024

Democratizing Reasoning Ability: Tailored Learning from Large Language Model
Zhaoyang Wang, Shaohan Huang, Yuxuan Liu, Jiahai Wang, Minghui Song, ..., Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang
LRM · 20 Oct 2023

Learning Student-Friendly Teacher Networks for Knowledge Distillation
D. Park, Moonsu Cha, C. Jeong, Daesin Kim, Bohyung Han
12 Feb 2021

Curriculum Learning: A Survey
Petru Soviany, Radu Tudor Ionescu, Paolo Rota, N. Sebe
ODL · 25 Jan 2021

I-BERT: Integer-only BERT Quantization
Sehoon Kim, A. Gholami, Z. Yao, Michael W. Mahoney, Kurt Keutzer
MQ · 05 Jan 2021

Meta Pseudo Labels
Hieu H. Pham, Zihang Dai, Qizhe Xie, Minh-Thang Luong, Quoc V. Le
VLM · 23 Mar 2020

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
12 Jun 2018

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 20 Apr 2018