
RNNs can generate bounded hierarchical languages with optimal memory

John Hewitt, Michael Hahn, Surya Ganguli, Percy Liang, Christopher D. Manning
arXiv:2010.07515, 15 October 2020

Papers citing "RNNs can generate bounded hierarchical languages with optimal memory" (14 papers shown)
Understanding the Logic of Direct Preference Alignment through Logic
Kyle Richardson, Vivek Srikumar, Ashish Sabharwal
23 Dec 2024
Training Neural Networks as Recognizers of Formal Languages
Alexandra Butoi, Ghazal Khalighinejad, Anej Svete, Josef Valvoda, Ryan Cotterell, Brian DuSell
11 Nov 2024
On the Representational Capacity of Neural Language Models with Chain-of-Thought Reasoning
Franz Nowak, Anej Svete, Alexandra Butoi, Ryan Cotterell
20 Jun 2024
Separations in the Representational Capabilities of Transformers and Recurrent Architectures
S. Bhattamishra, Michael Hahn, Phil Blunsom, Varun Kanade
13 Jun 2024
What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages
Nadav Borenstein, Anej Svete, R. Chan, Josef Valvoda, Franz Nowak, Isabelle Augenstein, Eleanor Chodroff, Ryan Cotterell
06 Jun 2024
Learned feature representations are biased by complexity, learning order, position, and more
Andrew Kyle Lampinen, Stephanie C. Y. Chan, Katherine Hermann
09 May 2024
Recurrent Neural Language Models as Probabilistic Finite-state Automata
Anej Svete, Ryan Cotterell
08 Oct 2023
Algorithms for Acyclic Weighted Finite-State Automata with Failure Arcs
Anej Svete, Benjamin Dayan, Tim Vieira, Ryan Cotterell, Jason Eisner
17 Jan 2023
Benchmarking Compositionality with Formal Languages
Josef Valvoda, Naomi Saphra, Jonathan Rawski, Adina Williams, Ryan Cotterell
17 Aug 2022
Assessing the Unitary RNN as an End-to-End Compositional Model of Syntax
Jean-Philippe Bernardy, Shalom Lappin
11 Aug 2022
Causal Transformers Perform Below Chance on Recursive Nested Constructions, Unlike Humans
Yair Lakretz, T. Desbordes, Dieuwke Hupkes, S. Dehaene
14 Oct 2021
Thinking Like Transformers
Gail Weiss, Yoav Goldberg, Eran Yahav
13 Jun 2021
The Limitations of Limited Context for Constituency Parsing
Yuchen Li, Andrej Risteski
03 Jun 2021
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Zhehuai Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
26 Sep 2016