A Theory for Length Generalization in Learning to Reason
31 March 2024
Changnan Xiao, Bing Liu
LRM

Papers citing "A Theory for Length Generalization in Learning to Reason"
12 papers shown

Chain-of-Thought Prompting for Out-of-Distribution Samples: A Latent-Variable Study
Yu Wang, Fu-Chieh Chang, Pei-Yuan Wu
OODD, ReLM, LRM
17 Apr 2025

Language Models, Graph Searching, and Supervision Adulteration: When More Supervision is Less and How to Make More More
Arvid Frydenlund
LRM
13 Mar 2025

The Lookahead Limitation: Why Multi-Operand Addition is Hard for LLMs
Tanja Baeumel, Josef van Genabith, Simon Ostermann
LRM
27 Feb 2025

Unraveling Arithmetic in Large Language Models: The Role of Algebraic Structures
Fu-Chieh Chang, Pei-Yuan Wu
LRM
25 Nov 2024

RL-STaR: Theoretical Analysis of Reinforcement Learning Frameworks for Self-Taught Reasoner
Fu-Chieh Chang, Yu-Ting Lee, Hui-Ying Shih, Pei-Yuan Wu
OffRL, LRM
31 Oct 2024

ATHENA: Mathematical Reasoning with Thought Expansion
JB. Kim, Hazel Kim, Joonghyuk Hahn, Yo-Sub Han
ReLM, LRM, AIMat
02 Nov 2023

Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
ALM
03 May 2023

Binding Language Models in Symbolic Languages
Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, ..., Dragomir R. Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu
LMTD
06 Oct 2022

Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought
Abulhair Saparov, He He
ELM, LRM, ReLM
03 Oct 2022

Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
ReLM, BDL, LRM, AI4CE
21 Mar 2022

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro, LRM, AI4CE, ReLM
28 Jan 2022

Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Ofir Press, Noah A. Smith, M. Lewis
27 Aug 2021