Limits of Transformer Language Models on Learning to Compose Algorithms

arXiv:2402.05785 (v3, latest) · 8 February 2024
Jonathan Thomm, Aleksandar Terzić, Giacomo Camposampiero, Michael Hersche, Bernhard Schölkopf, Abbas Rahimi

Papers citing "Limits of Transformer Language Models on Learning to Compose Algorithms"

5 / 5 papers shown

Behavioural vs. Representational Systematicity in End-to-End Models: An Opinionated Survey
Ivan Vegner, Sydelle de Souza, Valentin Forch, Martha Lewis, Leonidas A.A. Doumas
04 Jun 2025

Continuous Chain of Thought Enables Parallel Exploration and Reasoning
Halil Alperen Gozeten, M. E. Ildiz, Xuechen Zhang, Hrayr Harutyunyan, A. S. Rawat, Samet Oymak
29 May 2025 · LRM

Lost in Transmission: When and Why LLMs Fail to Reason Globally
Tobias Schnabel, Kiran Tomlinson, Adith Swaminathan, Jennifer Neville
13 May 2025 · LRM

Can Large Reasoning Models do Analogical Reasoning under Perceptual Uncertainty?
Giacomo Camposampiero, Michael Hersche, Roger Wattenhofer, Abu Sebastian, Abbas Rahimi
14 Mar 2025 · LRM

MathGAP: Out-of-Distribution Evaluation on Problems with Arbitrarily Complex Proofs
Andreas Opedal, Haruki Shirakami, Bernhard Schölkopf, Abulhair Saparov, Mrinmaya Sachan
17 Feb 2025 · LRM