
Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains

2 December 2020 · arXiv:2012.01266

Haojie Pan, Chengyu Wang, Minghui Qiu, Yichang Zhang, Yaliang Li, Jun Huang

Papers citing "Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains"

15 / 15 papers shown

EvoP: Robust LLM Inference via Evolutionary Pruning
Shangyu Wu, Hongchao Du, Ying Xiong, Shuai Chen, Tei-Wei Kuo, Nan Guan, Chun Jason Xue
34 · 1 · 0 · 19 Feb 2025

A Hybrid Cross-Stage Coordination Pre-ranking Model for Online Recommendation Systems
Binglei Zhao, Houying Qi, Guang Xu, Mian Ma, Xiwei Zhao, Feng Mei, Sulong Xu, Jinghe Hu
57 · 0 · 0 · 17 Feb 2025

MoDeGPT: Modular Decomposition for Large Language Model Compression
Chi-Heng Lin, Shangqian Gao, James Seale Smith, Abhishek Patel, Shikhar Tuli, Yilin Shen, Hongxia Jin, Yen-Chang Hsu
71 · 8 · 0 · 19 Aug 2024

MIDAS: Multi-level Intent, Domain, And Slot Knowledge Distillation for Multi-turn NLU
Yan Li, So-Eon Kim, Seong-Bae Park, S. Han
25 · 0 · 0 · 15 Aug 2024

CLLMs: Consistency Large Language Models
Siqi Kou, Lanxiang Hu, Zhe He, Zhijie Deng, Hao Zhang
49 · 28 · 0 · 28 Feb 2024

Position: Key Claims in LLM Research Have a Long Tail of Footnotes
Anna Rogers, A. Luccioni
53 · 19 · 0 · 14 Aug 2023

Few-Shot Learning of Compact Models via Task-Specific Meta Distillation
Yong Wu, Shekhor Chanda, M. Hosseinzadeh, Zhi Liu, Yang Wang
VLM · 29 · 7 · 0 · 18 Oct 2022

EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing
Chengyu Wang, Minghui Qiu, Chen Shi, Taolin Zhang, Tingting Liu, Lei Li, Jie Wang, Ming Wang, Jun Huang, W. Lin
27 · 21 · 0 · 30 Apr 2022

HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression
Chenhe Dong, Yaliang Li, Ying Shen, Minghui Qiu
VLM · 43 · 7 · 0 · 16 Oct 2021

Learning to Teach with Student Feedback
Yitao Liu, Tianxiang Sun, Xipeng Qiu, Xuanjing Huang
VLM · 23 · 6 · 0 · 10 Sep 2021

Learning to Augment for Data-Scarce Domain BERT Knowledge Distillation
Lingyun Feng, Minghui Qiu, Yaliang Li, Haitao Zheng, Ying Shen
46 · 10 · 0 · 20 Jan 2021

EasyTransfer -- A Simple and Scalable Deep Transfer Learning Platform for NLP Applications
Minghui Qiu, Peng Li, Chengyu Wang, Hanjie Pan, Yaliang Li, ..., Jun Yang, Jun Huang, Deng Cai, Wei Lin
VLM · SyDa · 39 · 20 · 0 · 18 Nov 2020

Probabilistic Model-Agnostic Meta-Learning
Chelsea Finn, Kelvin Xu, Sergey Levine
BDL · 176 · 666 · 0 · 07 Jun 2018

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD · 457 · 11,715 · 0 · 09 Mar 2017

Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
Antti Tarvainen, Harri Valpola
OOD · MoMe · 273 · 1,275 · 0 · 06 Mar 2017