
N-gram Language Modeling using Recurrent Neural Network Estimation

Abstract

We investigate the effective memory depth of RNN models by using them for n-gram language model (LM) smoothing. Experiments on a small corpus (UPenn Treebank, one million words of training data and 10k vocabulary) have found the LSTM cell with dropout to be the best model for encoding the n-gram state when compared with feed-forward and vanilla RNN models. When preserving the sentence independence assumption the LSTM n-gram matches the LSTM LM performance for n = 9 and slightly outperforms it for n = 13. When allowing dependencies across sentence boundaries, the LSTM 13-gram almost matches the perplexity of the unlimited-history LSTM LM. LSTM n-gram smoothing also has the desirable property of improving with increasing n-gram order, unlike the Katz or Kneser-Ney back-off estimators. Using multinomial distributions as targets in training instead of the usual one-hot target is only slightly beneficial for low n-gram orders. Experiments on the One Billion Words benchmark show that the results hold at larger scale: while LSTM smoothing for short n-gram contexts does not provide significant advantages over classic n-gram models, it becomes effective with long contexts (n > 5); depending on the task and amount of data, it can match fully recurrent LSTM models at about n = 13. This may have implications when modeling short-format text, e.g. voice search/query LMs. Building LSTM n-gram LMs may be appealing for some practical situations: the state in an n-gram LM can be succinctly represented with (n-1)*4 bytes storing the identity of the words in the context, and batches of n-gram contexts can be processed in parallel. On the downside, the n-gram context encoding computed by the LSTM is discarded, making the model more expensive than a regular recurrent LSTM LM.
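The core idea described in the abstract, truncating the history to the previous n-1 words, encoding that fixed-length context with an LSTM, and predicting the next word from the final hidden state, can be sketched as below. This is a minimal illustration, not the authors' implementation: the class name, layer sizes, dropout placement, and hyperparameters are assumptions made for the example.

import torch
import torch.nn as nn

class LSTMNgramLM(nn.Module):
    """Sketch of an LSTM n-gram LM: the model only ever sees the n-1 previous word ids."""
    def __init__(self, vocab_size=10000, order=9, embed_dim=256,
                 hidden_dim=512, dropout=0.5):
        super().__init__()
        self.order = order                      # n; the context length is n-1
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.drop = nn.Dropout(dropout)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context):
        # context: LongTensor of shape (batch, n-1) holding word ids. Because
        # the "state" is just these n-1 ids, batches of contexts can be scored
        # in parallel; the LSTM encoding is recomputed (and discarded) each time.
        emb = self.drop(self.embed(context))
        out, _ = self.lstm(emb)                 # (batch, n-1, hidden_dim)
        last = self.drop(out[:, -1, :])         # encoding of the full context
        return self.proj(last)                  # next-word logits

# Usage: score a batch of 8-word contexts (n = 9) against their next words.
model = LSTMNgramLM(vocab_size=10000, order=9)
contexts = torch.randint(0, 10000, (32, 8))    # 32 contexts of n-1 = 8 words
targets = torch.randint(0, 10000, (32,))
loss = nn.CrossEntropyLoss()(model(contexts), targets)

Note how the sketch reflects the trade-off stated in the abstract: each context is stored compactly as (n-1) word ids, but the LSTM must re-encode the context from scratch for every prediction, unlike a fully recurrent LM that carries its hidden state forward.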
