Transformers Learn Shortcuts to Automata

arXiv:2210.10749, 19 October 2022
Bingbin Liu, Jordan T. Ash, Surbhi Goel, A. Krishnamurthy, Cyril Zhang
OffRL, LRM

Papers citing "Transformers Learn Shortcuts to Automata"

50 / 136 papers shown

Explicitly Encoding Structural Symmetry is Key to Length Generalization in Arithmetic Tasks
Mahdi Sabbaghi, George Pappas, Hamed Hassani, Surbhi Goel
04 Jun 2024

Contextual Counting: A Mechanistic Study of Transformers on a Quantitative Task
Siavash Golkar, Alberto Bietti, Mariel Pettee, Michael Eickenberg, M. Cranmer, ..., Ruben Ohana, Liam Parker, Bruno Régaldo-Saint Blancard, Kyunghyun Cho, Shirley Ho
30 May 2024

Language Models Need Inductive Biases to Count Inductively
Yingshan Chang, Yonatan Bisk
LRM
30 May 2024

The Expressive Capacity of State Space Models: A Formal Language Perspective
Yash Sarrof, Yana Veitsman, Michael Hahn
Mamba
27 May 2024

Limits of Deep Learning: Sequence Modeling through the Lens of Complexity Theory
Nikola Zubić, Federico Soldá, Aurelio Sulser, Davide Scaramuzza
LRM, BDL
26 May 2024

Transformers represent belief state geometry in their residual stream
A. Shai, Sarah E. Marzen, Lucas Teixeira, Alexander Gietelink Oldenziel, P. Riechers
AI4CE
24 May 2024

Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization
Boshi Wang, Xiang Yue, Yu-Chuan Su, Huan Sun
LRM
23 May 2024

Initialization is Critical to Whether Transformers Fit Composite Functions by Reasoning or Memorizing
Zhongwang Zhang, Pengxiao Lin, Zhiwei Wang, Yaoyu Zhang, Z. Xu
08 May 2024

Towards a Theoretical Understanding of the 'Reversal Curse' via Training Dynamics
Hanlin Zhu, Baihe Huang, Shaolun Zhang, Michael I. Jordan, Jiantao Jiao, Yuandong Tian, Stuart Russell
LRM, AI4CE
07 May 2024

U-Nets as Belief Propagation: Efficient Classification, Denoising, and Diffusion in Generative Hierarchical Models
Song Mei
3DV, AI4CE, DiffM
29 Apr 2024
The Illusion of State in State-Space Models
William Merrill, Jackson Petty, Ashish Sabharwal
12 Apr 2024

Semantically-correlated memories in a dense associative model
Thomas F Burns
10 Apr 2024

What Can Transformer Learn with Varying Depth? Case Studies on Sequence Learning Tasks
Xingwu Chen, Difan Zou
ViT
02 Apr 2024

Simulating Weighted Automata over Sequences and Trees with Transformers
Michael Rizvi, M. Lizaire, Clara Lacroce, Guillaume Rabusseau
AI4CE
12 Mar 2024

The pitfalls of next-token prediction
Gregor Bachmann, Vaishnavh Nagarajan
11 Mar 2024

RNNs are not Transformers (Yet): The Key Bottleneck on In-context Retrieval
Kaiyue Wen, Xingyu Dang, Kaifeng Lyu
28 Feb 2024

How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning
Subhabrata Dutta, Joykirat Singh, Soumen Chakrabarti, Tanmoy Chakraborty
LRM
28 Feb 2024

Do Efficient Transformers Really Save Computation?
Kai-Bo Yang, Jan Ackermann, Zhenyu He, Guhao Feng, Bohang Zhang, Yunzhen Feng, Qiwei Ye, Di He, Liwei Wang
21 Feb 2024

Chain of Thought Empowers Transformers to Solve Inherently Serial Problems
Zhiyuan Li, Hong Liu, Denny Zhou, Tengyu Ma
LRM, AI4CE
20 Feb 2024

Transformers Can Achieve Length Generalization But Not Robustly
Yongchao Zhou, Uri Alon, Xinyun Chen, Xuezhi Wang, Rishabh Agarwal, Denny Zhou
14 Feb 2024
Towards an Understanding of Stepwise Inference in Transformers: A Synthetic Graph Navigation Model
Mikail Khona, Maya Okawa, Jan Hula, Rahul Ramesh, Kento Nishi, Robert P. Dick, Ekdeep Singh Lubana, Hidenori Tanaka
12 Feb 2024

Limits of Transformer Language Models on Learning to Compose Algorithms
Jonathan Thomm, Aleksandar Terzić, Giacomo Camposampiero, Michael Hersche, Bernhard Schölkopf, Abbas Rahimi
08 Feb 2024

Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks
Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, Dimitris Papailiopoulos
06 Feb 2024

Discovery of the Hidden World with Large Language Models
Chenxi Liu, Yongqiang Chen, Tongliang Liu, Mingming Gong, James Cheng, Bo Han, Kun Zhang
CML
06 Feb 2024

Understanding Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation
Xinyi Wang, Alfonso Amayuelas, Kexun Zhang, Liangming Pan, Wenhu Chen, W. Wang
LRM
05 Feb 2024

The Matrix: A Bayesian learning model for LLMs
Siddhartha Dalal, Vishal Misra
05 Feb 2024

Simulation of Graph Algorithms with Looped Transformers
Artur Back de Luca, K. Fountoulakis
02 Feb 2024

Investigating Recurrent Transformers with Dynamic Halt
Jishnu Ray Chowdhury, Cornelia Caragea
01 Feb 2024

Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation
Zhenyu He, Guhao Feng, Shengjie Luo, Kai-Bo Yang, Liwei Wang, Jingjing Xu, Zhi Zhang, Hongxia Yang, Di He
29 Jan 2024
An Information-Theoretic Analysis of In-Context Learning
Hong Jun Jeon, Jason D. Lee, Qi Lei, Benjamin Van Roy
28 Jan 2024

Learning Universal Predictors
Jordi Grau-Moya, Tim Genewein, Marcus Hutter, Laurent Orseau, Grégoire Delétang, ..., Anian Ruoss, Wenliang Kevin Li, Christopher Mattern, Matthew Aitchison, J. Veness
26 Jan 2024

Transformer-Based Models Are Not Yet Perfect At Learning to Emulate Structural Recursion
Dylan Zhang, Curt Tigges, Zory Zhang, Stella Biderman, Maxim Raginsky, Talia Ringer
23 Jan 2024

Extracting Formulae in Many-Valued Logic from Deep Neural Networks
Yani Zhang, Helmut Bölcskei
22 Jan 2024

On The Expressivity of Recurrent Neural Cascades
Nadezda A. Knorozova, Alessandro Ronca
14 Dec 2023

Interpretability Illusions in the Generalization of Simplified Models
Dan Friedman, Andrew Kyle Lampinen, Lucas Dixon, Danqi Chen, Asma Ghandeharioun
06 Dec 2023

Transformers are uninterpretable with myopic methods: a case study with bounded Dyck grammars
Kaiyue Wen, Yuchen Li, Bing Liu, Andrej Risteski
03 Dec 2023

Compositional Capabilities of Autoregressive Transformers: A Study on Synthetic, Interpretable Tasks
Rahul Ramesh, Ekdeep Singh Lubana, Mikail Khona, Robert P. Dick, Hidenori Tanaka
CoGe
21 Nov 2023

Looped Transformers are Better at Learning Learning Algorithms
Liu Yang, Kangwook Lee, Robert D. Nowak, Dimitris Papailiopoulos
21 Nov 2023

Generative learning for nonlinear dynamics
William Gilpin
AI4CE, PINN
07 Nov 2023

What Formal Languages Can Transformers Express? A Survey
Lena Strobl, William Merrill, Gail Weiss, David Chiang, Dana Angluin
AI4CE
01 Nov 2023
In-Context Learning Dynamics with Random Binary Sequences
Eric J. Bigelow, Ekdeep Singh Lubana, Robert P. Dick, Hidenori Tanaka, T. Ullman
26 Oct 2023

Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining
Licong Lin, Yu Bai, Song Mei
OffRL
12 Oct 2023

Do pretrained Transformers Learn In-Context by Gradient Descent?
Lingfeng Shen, Aayush Mishra, Daniel Khashabi
12 Oct 2023

The Expressive Power of Transformers with Chain of Thought
William Merrill, Ashish Sabharwal
LRM, AI4CE, ReLM
11 Oct 2023

Sparse Universal Transformer
Shawn Tan, Yikang Shen, Zhenfang Chen, Aaron Courville, Chuang Gan
MoE
11 Oct 2023

Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions
S. Bhattamishra, Arkil Patel, Phil Blunsom, Varun Kanade
04 Oct 2023

Boolformer: Symbolic Regression of Logic Functions with Transformers
Stéphane d'Ascoli, Samy Bengio, Josh Susskind, Emmanuel Abbe
21 Sep 2023

Deep Networks as Denoising Algorithms: Sample-Efficient Learning of Diffusion Models in High-Dimensional Graphical Models
Song Mei, Yuchen Wu
DiffM
20 Sep 2023

Auto-Regressive Next-Token Predictors are Universal Learners
Eran Malach
LRM
13 Sep 2023

Breaking through the learning plateaus of in-context learning in Transformer
Jingwen Fu, Tao Yang, Yuwang Wang, Yan Lu, Nanning Zheng
12 Sep 2023