
CoT is Not True Reasoning, It Is Just a Tight Constraint to Imitate: A Theory Perspective

Main: 3 pages
Bibliography: 2 pages
Tables: 1
Appendix: 1 page
Abstract

Chain-of-Thought (CoT) prompting has demonstrably enhanced the performance of Large Language Models (LLMs) on tasks requiring multi-step inference. This success has led to widespread claims of emergent reasoning capabilities in these models. In this paper, we present a theoretical counter-perspective: CoT does not elicit genuine, abstract reasoning. Instead, we argue that CoT functions as a powerful structural constraint that guides LLMs to imitate the form of reasoning. By forcing the generation of intermediate steps, CoT leverages the model's immense capacity for sequence prediction and pattern matching, effectively constraining its output to sequences that resemble coherent thought processes.
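
To make the prompt-level mechanism concrete, below is a minimal Python sketch contrasting direct prompting with zero-shot CoT prompting, where a trailing trigger phrase forces the model to generate intermediate steps before an answer. The example question, the function names, and the call_model placeholder are illustrative assumptions, not code or an API from the paper.

# Minimal sketch: direct prompting vs. zero-shot CoT prompting.
# The model call is a hypothetical placeholder; any text-completion
# LLM interface could stand in for call_model.

QUESTION = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)

def direct_prompt(question: str) -> str:
    # The model is free to emit an answer token immediately.
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    # The trailing trigger phrase constrains the model to first produce
    # intermediate steps, i.e. text that resembles a reasoning trace.
    return f"Q: {question}\nA: Let's think step by step."

def call_model(prompt: str) -> str:
    # Placeholder: substitute any LLM completion call here (assumption).
    raise NotImplementedError

if __name__ == "__main__":
    print(direct_prompt(QUESTION))
    print("---")
    print(cot_prompt(QUESTION))

On the paper's account, the second prompt improves accuracy not by unlocking abstract reasoning but by restricting the model's output distribution to sequences shaped like step-by-step derivations.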

View on arXiv: https://arxiv.org/abs/2506.02878
@article{shao2025_2506.02878,
  title={CoT is Not True Reasoning, It Is Just a Tight Constraint to Imitate: A Theory Perspective},
  author={Jintian Shao and Yiming Cheng},
  journal={arXiv preprint arXiv:2506.02878},
  year={2025}
}