Enhancing Low-Resource NMT with a Multilingual Encoder and Knowledge Distillation: A Case Study

9 July 2024
Aniruddha Roy, Pretam Ray, Ayush Maheshwari, Sudeshna Sarkar, Pawan Goyal
arXiv: 2407.06538

Papers citing "Enhancing Low-Resource NMT with a Multilingual Encoder and Knowledge Distillation: A Case Study"

9 / 9 papers shown
1. Survey of Low-Resource Machine Translation
   Barry Haddow, Rachel Bawden, Antonio Valerio Miceli Barone, Jindřich Helcl, Alexandra Birch
   01 Sep 2021 · 157 citations · AIMat

2. On the Copying Behaviors of Pre-Training for Neural Machine Translation
   Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, Zhaopeng Tu
   17 Jul 2021 · 25 citations

3. Nearest Neighbor Machine Translation
   Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, M. Lewis
   01 Oct 2020 · 283 citations · RALM

4. Language Model Prior for Low-Resource Neural Machine Translation
   Christos Baziotis, Barry Haddow, Alexandra Birch
   30 Apr 2020 · 53 citations

5. Unsupervised Cross-lingual Representation Learning at Scale
   Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov
   05 Nov 2019 · 6,454 citations

6. Cross-Lingual Natural Language Generation via Pre-Training
   Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, Xian-Ling Mao, Heyan Huang
   23 Sep 2019 · 136 citations

7. MASS: Masked Sequence to Sequence Pre-training for Language Generation
   Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu
   07 May 2019 · 962 citations

8. Adam: A Method for Stochastic Optimization
   Diederik P. Kingma, Jimmy Ba
   22 Dec 2014 · 149,474 citations · ODL

9. Neural Machine Translation by Jointly Learning to Align and Translate
   Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio
   01 Sep 2014 · 27,205 citations · AIMat