Evaluating the Ability of LSTMs to Learn Context-Free Grammars

6 November 2018
Luzi Sennhauser
Robert C. Berwick
arXiv:1811.02611

Papers citing "Evaluating the Ability of LSTMs to Learn Context-Free Grammars"

5 / 5 papers shown
Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions
S. Bhattamishra, Arkil Patel, Varun Kanade, Phil Blunsom
22 Nov 2022
Assessing the Unitary RNN as an End-to-End Compositional Model of Syntax
Jean-Philippe Bernardy, Shalom Lappin
11 Aug 2022
Thinking Like Transformers
Gail Weiss, Yoav Goldberg, Eran Yahav
13 Jun 2021
On the Computational Power of Transformers and its Implications in Sequence Modeling
S. Bhattamishra, Arkil Patel, Navin Goyal
16 Jun 2020
Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck Languages
Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, Stuart M. Shieber
08 Nov 2019