Logical Languages Accepted by Transformer Encoders with Hard Attention

5 October 2023
Pablo Barceló, Alexander Kozachinskiy, Anthony Widjaja Lin, Vladimir Podolskii
arXiv:2310.03817

Papers citing "Logical Languages Accepted by Transformer Encoders with Hard Attention"

6 papers shown.

1. Rosetta-PL: Propositional Logic as a Benchmark for Large Language Model Reasoning
   Shaun Baek, Shaun Esua-Mensah, Cyrus Tsui, Sejan Vigneswaralingam, Abdullah Alali, Michael Lu, Vasu Sharma, Sean O'Brien, Kevin Zhu
   25 Mar 2025

2. Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers
   Alireza Amiri, Xinting Huang, Mark Rofin, Michael Hahn
   04 Feb 2025

3. Tighter Bounds on the Expressivity of Transformer Encoders
   David Chiang, Peter A. Cholak, A. Pillay
   25 Jan 2023

4. Formal Language Recognition by Hard Attention Transformers: Perspectives from Circuit Complexity
   Sophie Hao, Dana Angluin, Robert Frank
   13 Apr 2022

5. Overcoming a Theoretical Limitation of Self-Attention
   David Chiang, Peter A. Cholak
   24 Feb 2022

6. Theoretical Limitations of Self-Attention in Neural Sequence Models
   Michael Hahn
   16 Jun 2019