arXiv:2410.07432
Can Transformers Reason Logically? A Study in SAT Solving
9 October 2024
Leyan Pan, Vijay Ganesh, Jacob Abernethy, Chris Esposo, Wenke Lee
Tags: ReLM, LRM
Papers citing "Can Transformers Reason Logically? A Study in SAT Solving" (15 papers):

SATURN: SAT-based Reinforcement Learning to Unleash Language Model Reasoning
Huanyu Liu, Jia Li, Hao Zhu, Kechi Zhang, Yihong Dong, Ge Li
Tags: OffRL, ReLM, LRM
22 May 2025 · 0 citations

SATBench: Benchmarking LLMs' Logical Reasoning via Automated Puzzle Generation from SAT Formulas
Anjiang Wei, Yuheng Wu, Yingjia Wan, Tarun Suresh, Huanmi Tan, Zhanke Zhou, Sanmi Koyejo, Ke Wang
Tags: ReLM, LRM
20 May 2025 · 1 citation

Chain of Thought Empowers Transformers to Solve Inherently Serial Problems
Zhiyuan Li, Hong Liu, Denny Zhou, Tengyu Ma
Tags: LRM, AI4CE
20 Feb 2024 · 133 citations

The Expressive Power of Transformers with Chain of Thought
William Merrill, Ashish Sabharwal
Tags: LRM, AI4CE, ReLM
11 Oct 2023 · 41 citations

Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective
Guhao Feng, Bohang Zhang, Yuntian Gu, Haotian Ye, Di He, Liwei Wang
Tags: LRM
24 May 2023 · 259 citations

LLaMA: Open and Efficient Foundation Language Models
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, ..., Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample
Tags: ALM, PILM
27 Feb 2023 · 13,472 citations

Looped Transformers as Programmable Computers
Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason D. Lee, Dimitris Papailiopoulos
30 Jan 2023 · 106 citations

Transformers Learn Shortcuts to Automata
Bingbin Liu, Jordan T. Ash, Surbhi Goel, A. Krishnamurthy, Cyril Zhang
Tags: OffRL, LRM
19 Oct 2022 · 178 citations

The Parallelism Tradeoff: Limitations of Log-Precision Transformers
William Merrill, Ashish Sabharwal
02 Jul 2022 · 114 citations

On the Paradox of Learning to Reason from Data
Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, Guy Van den Broeck
Tags: NAI, ReLM, OOD, LRM
23 May 2022 · 108 citations

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
Tags: LM&Ro, LRM, AI4CE, ReLM
28 Jan 2022 · 9,683 citations

Thinking Like Transformers
Gail Weiss, Yoav Goldberg, Eran Yahav
Tags: AI4CE
13 Jun 2021 · 135 citations

Self-Attention Networks Can Process Bounded Hierarchical Languages
Shunyu Yao, Binghui Peng, Christos H. Papadimitriou, Karthik Narasimhan
24 May 2021 · 83 citations

On the Turing Completeness of Modern Neural Network Architectures
Jorge A. Pérez, Javier Marinkovic, Pablo Barceló
Tags: BDL
10 Jan 2019 · 146 citations

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
Tags: 3DV
12 Jun 2017 · 132,454 citations