Understanding Knowledge Distillation in Non-autoregressive Machine Translation

7 November 2019
Chunting Zhou, Graham Neubig, Jiatao Gu
arXiv:1911.02727

Papers citing "Understanding Knowledge Distillation in Non-autoregressive Machine Translation"

5 of 55 citing papers shown:

Non-Autoregressive Machine Translation with Latent Alignments
Chitwan Saharia, William Chan, Saurabh Saxena, Mohammad Norouzi
16 Apr 2020

Exploring Versatile Generative Language Model Via Parameter-Efficient Transfer Learning
Zhaojiang Lin, Andrea Madotto, Pascale Fung
08 Apr 2020

Aligned Cross Entropy for Non-Autoregressive Machine Translation
Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, Omer Levy
03 Apr 2020

LAVA NAT: A Non-Autoregressive Translation Model with Look-Around Decoding and Vocabulary Attention
Xiaoya Li, Yuxian Meng, Arianna Yuan, Fei Wu, Jiwei Li
08 Feb 2020

Semi-Autoregressive Training Improves Mask-Predict Decoding
Marjan Ghazvininejad, Omer Levy, Luke Zettlemoyer
23 Jan 2020