End-to-End Training of Both Translation Models in the Back-Translation Framework

Abstract

Semi-supervised learning algorithms in neural machine translation (NMT) have significantly improved translation quality over supervised learning methods by exploiting additional monolingual corpora. Among them, back-translation is a theoretically well-structured, state-of-the-art method. Given two pre-trained NMT models between source and target languages, one model translates a monolingual sentence into a latent sentence, and the other reconstructs the original monolingual sentence from that latent sentence. Building on this auto-encoding framework, previous work attempted to apply the variational auto-encoder (VAE) training framework to back-translation. However, the discreteness of the latent sentence prevents end-to-end backpropagation. In this paper, we propose a {\it categorical reparameterization trick} that makes NMT models generate {\it differentiable sentences}. With the proposed method, end-to-end learning becomes possible, so the two NMT models used in back-translation can be trained as a single unified model. In addition, we propose several regularization techniques that are especially advantageous to this framework. Our experiments demonstrate that our method achieves better BLEU scores than the previous baseline on the WMT18 translation task datasets.
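The abstract does not spell out the trick itself, but a common way to realize such a categorical reparameterization is a Gumbel-Softmax (straight-through) relaxation of the decoder's token distribution, which lets gradients flow through the otherwise discrete latent sentence. The sketch below is a minimal illustration under that assumption, not the paper's actual implementation; the function name `gumbel_softmax_sample`, the use of PyTorch, and the usage line at the end are illustrative choices.

```python
# Illustrative sketch of a categorical reparameterization (Gumbel-Softmax style);
# the paper's exact formulation may differ. It shows how a decoder's discrete
# token choice can be relaxed so that backpropagation reaches the first NMT model.
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits: torch.Tensor, tau: float = 1.0, hard: bool = True) -> torch.Tensor:
    """Draw a (relaxed) one-hot sample from a categorical distribution over the vocabulary.

    logits: (batch, vocab_size) unnormalized decoder scores at one time step.
    tau:    temperature; lower values push the soft sample closer to one-hot.
    hard:   if True, use the straight-through estimator: the forward pass emits a
            one-hot vector, while gradients flow through the soft sample.
    """
    # Sample Gumbel(0, 1) noise; the small constants guard against log(0).
    gumbel_noise = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y_soft = F.softmax((logits + gumbel_noise) / tau, dim=-1)
    if hard:
        index = y_soft.argmax(dim=-1, keepdim=True)
        y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
        # Straight-through: output equals y_hard, gradients are taken w.r.t. y_soft.
        return (y_hard - y_soft).detach() + y_soft
    return y_soft

# Hypothetical usage: instead of feeding a hard token id into the second model,
# feed the relaxed one-hot vector through its embedding matrix so gradients
# propagate end-to-end through both translation models.
# embedded = gumbel_softmax_sample(decoder_logits, tau=0.5) @ embedding_weight
```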
