Neural Machine Translation (NMT) has obtained state-of-the-art performance for several language pairs while using only parallel data for training. Monolingual data plays an important role in boosting fluency for phrase-based statistical machine translation, and we investigate its use for NMT. In contrast to previous work, which integrates a separately trained RNN language model into the NMT architecture, we note that encoder-decoder NMT architectures already have the capacity to learn the same information as a language model, and we explore strategies to include monolingual training data in the training process. Through our use of monolingual data, we obtain substantial improvements on the WMT 15 English->German task (+2.8--3.4 BLEU) and on the low-resource IWSLT 14 Turkish->English task (+2.1--3.4 BLEU), achieving new state-of-the-art results. We also show that fine-tuning on in-domain monolingual and parallel data gives substantial improvements for the IWSLT 15 English->German task.
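One of the strategies the paper explores pairs monolingual target-language sentences with synthetic source sentences produced by a reverse (target->source) translation model, so that the resulting pairs can be mixed into the parallel training data. Below is a minimal Python sketch of that idea; `reverse_model` and its `translate` method are hypothetical stand-ins for any trained target->source NMT system, not an API from the paper.

```python
def augment_with_backtranslation(parallel, monolingual_target, reverse_model):
    """Mix synthetic parallel pairs into the training data.

    parallel:           list of (source, target) sentence pairs
    monolingual_target: list of target-language sentences
    reverse_model:      hypothetical target->source translation model
    """
    # Back-translate each monolingual target sentence to obtain a
    # synthetic source side, forming a synthetic parallel pair.
    synthetic = [(reverse_model.translate(t), t) for t in monolingual_target]
    # Synthetic pairs are treated like ordinary parallel data
    # during NMT training.
    return parallel + synthetic
```

The key design point is that no architectural change is needed: the synthetic pairs simply enlarge the training corpus, letting the decoder absorb the fluency benefits of the monolingual data.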