Representation of linguistic form and function in recurrent neural
networks
We present novel methods for analysing the activation patterns of RNNs and identifying the types of linguistic structure they learn. As a case study, we use a multi-task gated recurrent network model consisting of two parallel pathways with shared word embeddings, trained on predicting the representation of the visual scene corresponding to an input sentence, and on predicting the next word in the same sentence. We show that the image prediction pathway is sensitive to the information structure of the sentence, and pays selective attention to lexical categories and grammatical functions that carry semantic information. It also learns to treat the same input token differently depending on its grammatical function in the sentence. The language model pathway, by comparison, is more sensitive to words with a syntactic function. Our analysis of the function of individual hidden units shows that each pathway contains specialized units tuned to patterns informative for its task, some of which carry activations forward to later time steps to encode long-term dependencies.
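The two-pathway architecture described above can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the paper's implementation: all class and parameter names (`TwoPathwayModel`, `d_img`, etc.) are invented here, the GRU is a bare single-layer cell, and no training code is included. It shows only the structural idea: shared word embeddings feed two parallel recurrent pathways, one predicting a visual-feature vector from its final hidden state and the other predicting the next word at every time step.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRU:
    """Bare single-layer GRU cell (illustrative, untrained weights)."""
    def __init__(self, d_in, d_h):
        s = 0.1
        self.d_h = d_h
        self.Wz = rng.normal(0, s, (d_h, d_in + d_h))  # update gate
        self.Wr = rng.normal(0, s, (d_h, d_in + d_h))  # reset gate
        self.Wh = rng.normal(0, s, (d_h, d_in + d_h))  # candidate state

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)
        r = sigmoid(self.Wr @ xh)
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

class TwoPathwayModel:
    """Shared embeddings feed two parallel GRU pathways:
    a visual pathway predicting image features from the final state,
    and a language-model pathway predicting the next word each step."""
    def __init__(self, vocab, d_emb, d_h, d_img):
        self.E = rng.normal(0, 0.1, (vocab, d_emb))    # shared word embeddings
        self.gru_vis = GRU(d_emb, d_h)
        self.gru_lm = GRU(d_emb, d_h)
        self.W_img = rng.normal(0, 0.1, (d_img, d_h))  # image-prediction head
        self.W_lm = rng.normal(0, 0.1, (vocab, d_h))   # next-word head

    def forward(self, tokens):
        h_vis = np.zeros(self.gru_vis.d_h)
        h_lm = np.zeros(self.gru_lm.d_h)
        next_word_logits = []
        for t in tokens:
            x = self.E[t]                      # same embedding feeds both pathways
            h_vis = self.gru_vis.step(x, h_vis)
            h_lm = self.gru_lm.step(x, h_lm)
            next_word_logits.append(self.W_lm @ h_lm)
        img_pred = self.W_img @ h_vis          # visual features from final state
        return img_pred, next_word_logits

model = TwoPathwayModel(vocab=10, d_emb=8, d_h=16, d_img=5)
img_pred, lm_logits = model.forward([1, 4, 7])
```

Because the embeddings are shared while the recurrent weights are not, each pathway is free to develop its own sensitivity to the input, which is what the activation analyses above probe.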