Explaining Sequence-Level Knowledge Distillation as Data-Augmentation for Neural Machine Translation
Mitchell A. Gordon, Kevin Duh
6 December 2019 · arXiv:1912.03334
Papers citing "Explaining Sequence-Level Knowledge Distillation as Data-Augmentation for Neural Machine Translation" (10 papers shown)
Scaling Low-Resource MT via Synthetic Data Generation with LLMs
Ona de Gibert, Joseph Attieh, Teemu Vahtola, Mikko Aulamo, Zihao Li, Raúl Vázquez, Tiancheng Hu, Jörg Tiedemann
20 May 2025
Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages
Yuan Zhang, Yile Wang, Zijun Liu, Shuo Wang, Xiaolong Wang, Peng Li, Maosong Sun, Yang Liu
19 Feb 2024
Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu
08 Aug 2023
Target-Side Augmentation for Document-Level Machine Translation
Guangsheng Bao, Zhiyang Teng, Yue Zhang
08 May 2023
Teacher-Student Architecture for Knowledge Learning: A Survey
Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu
28 Oct 2022
A baseline revisited: Pushing the limits of multi-segment models for context-aware translation
Suvodeep Majumder, Stanislas Lauly, Maria Nadejde, Marcello Federico, Georgiana Dinu
19 Oct 2022
Stacked Hybrid-Attention and Group Collaborative Learning for Unbiased Scene Graph Generation
Xingning Dong, Tian Gan, Xuemeng Song, Jianlong Wu, Yuan Cheng, Liqiang Nie
18 Mar 2022
ESPnet-ST IWSLT 2021 Offline Speech Translation System
Hirofumi Inaguma, Shun Kiyono, Nelson Enrique Yalta Soplin, Pengcheng Guo, Jun Suzuki, Kevin Duh, Shinji Watanabe
01 Jul 2021
Selective Knowledge Distillation for Neural Machine Translation
Fusheng Wang, Jianhao Yan, Fandong Meng, Jie Zhou
27 May 2021
Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
09 Jun 2020