On the Turing Completeness of Modern Neural Network Architectures

Jorge A. Pérez, Javier Marinkovic, Pablo Barceló · BDL · 10 January 2019
arXiv:1901.03429

Papers citing "On the Turing Completeness of Modern Neural Network Architectures"

30 / 30 papers shown
Can Large Language Models Learn Formal Logic? A Data-Driven Training and Evaluation Framework
Yuan Xia, Akanksha Atrey, Fadoua Khmaissia, Kedar S. Namjoshi · LRM, ELM · 28 Apr 2025

Looped ReLU MLPs May Be All You Need as Practical Programmable Computers
Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao-quan Song, Yufa Zhou · 21 Feb 2025

Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers
Alireza Amiri, Xinting Huang, Mark Rofin, Michael Hahn · LRM · 04 Feb 2025

Are Transformers Able to Reason by Connecting Separated Knowledge in Training Data?
Yutong Yin, Zhaoran Wang · LRM, ReLM · 27 Jan 2025

Can Transformers Reason Logically? A Study in SAT Solving
Leyan Pan, Vijay Ganesh, Jacob Abernethy, Chris Esposo, Wenke Lee · ReLM, LRM · 09 Oct 2024

On the Complexity of Neural Computation in Superposition
Micah Adler, Nir Shavit · 05 Sep 2024

LifeGPT: Topology-Agnostic Generative Pretrained Transformer Model for Cellular Automata
Jaime Berkovich, Markus J. Buehler · AI4CE · 03 Sep 2024

Representing Rule-based Chatbots with Transformers
Dan Friedman, Abhishek Panigrahi, Danqi Chen · 15 Jul 2024

Separations in the Representational Capabilities of Transformers and Recurrent Architectures
S. Bhattamishra, Michael Hahn, Phil Blunsom, Varun Kanade · GNN · 13 Jun 2024

Limits of Deep Learning: Sequence Modeling through the Lens of Complexity Theory
Nikola Zubić, Federico Soldá, Aurelio Sulser, Davide Scaramuzza · LRM, BDL · 26 May 2024

Chain of Thought Empowers Transformers to Solve Inherently Serial Problems
Zhiyuan Li, Hong Liu, Denny Zhou, Tengyu Ma · LRM, AI4CE · 20 Feb 2024

Sample, estimate, aggregate: A recipe for causal discovery foundation models
Menghua Wu, Yujia Bao, Regina Barzilay, Tommi Jaakkola · CML · 02 Feb 2024

Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining
Licong Lin, Yu Bai, Song Mei · OffRL · 12 Oct 2023

Self-attention Dual Embedding for Graphs with Heterophily
Yurui Lai, Taiyan Zhang, Rui Fan · GNN · 28 May 2023

E(n)-equivariant Graph Neural Cellular Automata
G. Gala, Daniele Grattarola, Erik Quaeghebeur · GNN · 25 Jan 2023

Memory Augmented Large Language Models are Computationally Universal
Dale Schuurmans · 10 Jan 2023

Attention-based Neural Cellular Automata
Mattie Tesfaldet, Derek Nowrouzezahrai, C. Pal · ViT · 02 Nov 2022

HYPRO: A Hybridly Normalized Probabilistic Model for Long-Horizon Prediction of Event Sequences
Siqiao Xue, X. Shi, James Y. Zhang, Hongyuan Mei · AI4TS · 04 Oct 2022

Provably expressive temporal graph networks
Amauri Souza, Diego Mesquita, Samuel Kaski, Vikas K. Garg · 29 Sep 2022

What Can Transformers Learn In-Context? A Case Study of Simple Function Classes
Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant · 01 Aug 2022

Formal Language Recognition by Hard Attention Transformers: Perspectives from Circuit Complexity
Sophie Hao, Dana Angluin, Robert Frank · 13 Apr 2022

Can Vision Transformers Perform Convolution?
Shanda Li, Xiangning Chen, Di He, Cho-Jui Hsieh · ViT · 02 Nov 2021

Pairing Conceptual Modeling with Machine Learning
W. Maass, V. Storey · HAI · 27 Jun 2021

Vector Symbolic Architectures as a Computing Framework for Emerging Hardware
Denis Kleyko, Mike Davies, E. P. Frady, P. Kanerva, Spencer J. Kent, ..., Evgeny Osipov, J. Rabaey, D. Rachkovskij, Abbas Rahimi, Friedrich T. Sommer · 09 Jun 2021

On the Expressive Power of Self-Attention Matrices
Valerii Likhosherstov, K. Choromanski, Adrian Weller · 07 Jun 2021

Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth
Yihe Dong, Jean-Baptiste Cordonnier, Andreas Loukas · 05 Mar 2021

Transformers in Vision: A Survey
Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, F. Khan, M. Shah · ViT · 04 Jan 2021

On the Computational Power of Transformers and its Implications in Sequence Modeling
S. Bhattamishra, Arkil Patel, Navin Goyal · 16 Jun 2020

How hard is to distinguish graphs with graph neural networks?
Andreas Loukas · GNN · 13 May 2020

It's Not What Machines Can Learn, It's What We Cannot Teach
Gal Yehuda, Moshe Gabel, Assaf Schuster · FaML · 21 Feb 2020