Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items

31 August 2018
Jaap Jumelet, Dieuwke Hupkes

Papers citing "Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items"

5 papers shown:

1. "What you can cram into a single vector: Probing sentence embeddings for linguistic properties"
   Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni
   3 May 2018

2. "Colorless green recurrent networks dream hierarchically"
   Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, Marco Baroni
   29 March 2018

3. "The Importance of Being Recurrent for Modeling Hierarchical Structure"
   Ke M. Tran, Arianna Bisazza, Christof Monz
   9 March 2018

4. "Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure"
   Dieuwke Hupkes, Sara Veldhoen, Willem H. Zuidema
   28 November 2017

5. "Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies"
   Tal Linzen, Emmanuel Dupoux, Yoav Goldberg
   4 November 2016