Masked Hard-Attention Transformers Recognize Exactly the Star-Free Languages
Andy Yang, David Chiang, Dana Angluin
arXiv:2310.13897 · 21 October 2023
Papers citing "Masked Hard-Attention Transformers Recognize Exactly the Star-Free Languages" (9 papers)
Exact Expressive Power of Transformers with Padding
William Merrill, Ashish Sabharwal
25 May 2025 · 0 citations

Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers
Alireza Amiri, Xinting Huang, Mark Rofin, Michael Hahn
04 Feb 2025 · 1 citation · LRM

Training Neural Networks as Recognizers of Formal Languages
Alexandra Butoi, Ghazal Khalighinejad, Anej Svete, Josef Valvoda, Ryan Cotterell, Brian DuSell
11 Nov 2024 · 5 citations · NAI

Representing Rule-based Chatbots with Transformers
Dan Friedman, Abhishek Panigrahi, Danqi Chen
15 Jul 2024 · 1 citation

What Algorithms can Transformers Learn? A Study in Length Generalization
Hattie Zhou, Arwen Bradley, Etai Littwin, Noam Razin, Omid Saremi, Josh Susskind, Samy Bengio, Preetum Nakkiran
24 Oct 2023 · 118 citations

Formal Language Recognition by Hard Attention Transformers: Perspectives from Circuit Complexity
Sophie Hao, Dana Angluin, Robert Frank
13 Apr 2022 · 75 citations

Overcoming a Theoretical Limitation of Self-Attention
David Chiang, Peter A. Cholak
24 Feb 2022 · 81 citations

Theoretical Limitations of Self-Attention in Neural Sequence Models
Michael Hahn
16 Jun 2019 · 266 citations

Layer Normalization
Jimmy Lei Ba, J. Kiros, Geoffrey E. Hinton
21 Jul 2016 · 10,412 citations