Evaluating the Ability of LSTMs to Learn Context-Free Grammars
Luzi Sennhauser, Robert C. Berwick
6 November 2018
arXiv:1811.02611
Papers citing "Evaluating the Ability of LSTMs to Learn Context-Free Grammars" (5 papers)
1. Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions
   S. Bhattamishra, Arkil Patel, Varun Kanade, Phil Blunsom (22 Nov 2022)

2. Assessing the Unitary RNN as an End-to-End Compositional Model of Syntax
   Jean-Philippe Bernardy, Shalom Lappin (11 Aug 2022)

3. Thinking Like Transformers
   Gail Weiss, Yoav Goldberg, Eran Yahav (13 Jun 2021)

4. On the Computational Power of Transformers and its Implications in Sequence Modeling
   S. Bhattamishra, Arkil Patel, Navin Goyal (16 Jun 2020)

5. Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck Languages
   Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, Stuart M. Shieber (08 Nov 2019)