arXiv:1808.10627
Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items
31 August 2018
Jaap Jumelet, Dieuwke Hupkes
Papers citing
"Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items"
5 papers
What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni
03 May 2018
Colorless green recurrent networks dream hierarchically
Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, Marco Baroni
29 Mar 2018
The Importance of Being Recurrent for Modeling Hierarchical Structure
Ke M. Tran, Arianna Bisazza, Christof Monz
09 Mar 2018
Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure
Dieuwke Hupkes, Sara Veldhoen, Willem H. Zuidema
28 Nov 2017
Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
Tal Linzen, Emmanuel Dupoux, Yoav Goldberg
04 Nov 2016