Self-Knowledge Distillation in Natural Language Processing

arXiv:1908.01851 · 2 August 2019
Sangchul Hahn, Heeyoul Choi

Papers citing "Self-Knowledge Distillation in Natural Language Processing"

11 / 61 papers shown
Transformer-based ASR Incorporating Time-reduction Layer and Fine-tuning with Self-Knowledge Distillation
Md. Akmal Haidar, Chao Xing, Mehdi Rezagholizadeh
17 Mar 2021

Large-Scale Generative Data-Free Distillation
Liangchen Luo, Mark Sandler, Zi Lin, A. Zhmoginov, Andrew G. Howard
10 Dec 2020

Towards Data Distillation for End-to-end Spoken Conversational Question Answering
Chenyu You, Nuo Chen, Fenglin Liu, Dongchao Yang, Yuexian Zou
18 Oct 2020

Neighbourhood Distillation: On the benefits of non end-to-end distillation
Laetitia Shao, Max Moroz, Elad Eban, Yair Movshovitz-Attias
02 Oct 2020 · ODL

Weight Distillation: Transferring the Knowledge in Neural Network Parameters
Ye Lin, Yanyang Li, Ziyang Wang, Bei Li, Quan Du, Tong Xiao, Jingbo Zhu
19 Sep 2020

Noisy Self-Knowledge Distillation for Text Summarization
Yang Liu, S. Shen, Mirella Lapata
15 Sep 2020

Tackling the Unannotated: Scene Graph Generation with Bias-Reduced Models
Tong Wang, Selen Pehlivan, Jorma T. Laaksonen
18 Aug 2020

Self-Knowledge Distillation with Progressive Refinement of Targets
Kyungyul Kim, Byeongmoon Ji, Doyoung Yoon, Sangheum Hwang
22 Jun 2020 · ODL

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
09 Jun 2020 · VLM

Knowledge Distillation for Multilingual Unsupervised Neural Machine Translation
Haipeng Sun, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Tiejun Zhao
21 Apr 2020 · AIMat

A Strong Baseline for Learning Cross-Lingual Word Embeddings from Sentence Alignments
Omer Levy, Anders Søgaard, Yoav Goldberg
18 Aug 2016