
From Chaos to Order: The Atomic Reasoner Framework for Fine-grained Reasoning in Large Language Models

Abstract

Recent advances in large language models (LLMs) have shown remarkable progress, yet their capacity for logical ``slow-thinking'' reasoning remains a critical research frontier. Current inference scaling paradigms suffer from two fundamental constraints: fragmented thought flows that compromise logical coherence, and computational complexity that escalates sharply with the dimensionality of the search space. To overcome these limitations, we present \textbf{Atomic Reasoner} (\textbf{AR}), a cognitive inference strategy that enables fine-grained reasoning through systematic atomic-level operations. AR decomposes the reasoning process into atomic cognitive units, employing a cognitive routing mechanism to dynamically construct reasoning representations and orchestrate inference pathways. This systematic methodology implements stepwise, structured cognition, which ensures logical coherence while significantly reducing cognitive load, effectively simulating the cognitive patterns observed in human deep thinking. Extensive experimental results demonstrate AR's superior reasoning capabilities without the computational burden of exhaustive solution searches, particularly excelling in linguistic logic puzzles. These findings substantiate AR's effectiveness in enhancing LLMs' capacity for robust, long-sequence logical reasoning and deliberation.
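The abstract describes two components: decomposition into atomic cognitive units and a cognitive router that selects the next unit to apply. The paper's implementation is not given here, so the sketch below is purely illustrative; every name in it (`AtomicStep`, `route`, `solve`, and the three example steps) is a hypothetical stand-in for that idea, not the authors' code.

```python
# Illustrative sketch only: all names below are hypothetical stand-ins for the
# abstract's idea of atomic cognitive units plus a cognitive routing mechanism.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class AtomicStep:
    """One fine-grained reasoning operation applied to the working state."""
    name: str
    apply: Callable[[str], str]


def route(state: str, steps: List[AtomicStep]) -> Optional[AtomicStep]:
    """Toy cognitive router: pick the first step not yet recorded in the
    reasoning trace. A real router would score candidate operations against
    the current reasoning representation."""
    for step in steps:
        if step.name not in state:
            return step
    return None  # every atomic operation has been applied: done


def solve(question: str, steps: List[AtomicStep], max_steps: int = 10) -> str:
    """Chain atomic steps into one coherent trace instead of a single long
    leap, keeping each step's cognitive load small."""
    state = question
    for _ in range(max_steps):
        step = route(state, steps)
        if step is None:
            break
        state = step.apply(state)
    return state


# Toy atomic units for a linguistic logic puzzle.
steps = [
    AtomicStep("extract", lambda s: s + " | extract: list given facts"),
    AtomicStep("deduce", lambda s: s + " | deduce: apply one rule"),
    AtomicStep("verify", lambda s: s + " | verify: check consistency"),
]
trace = solve("Q: who owns the zebra?", steps)
```

The point of the sketch is the control flow: the router, not a fixed script, decides which atomic unit fires next, so the search stays local to one small decision per step rather than branching over entire solution paths.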

@article{liu2025_2503.15944,
  title={From Chaos to Order: The Atomic Reasoner Framework for Fine-grained Reasoning in Large Language Models},
  author={Jinyi Liu and Yan Zheng and Rong Cheng and Qiyu Wu and Wei Guo and Fei Ni and Hebin Liang and Yifu Yuan and Hangyu Mao and Fuzheng Zhang and Jianye Hao},
  journal={arXiv preprint arXiv:2503.15944},
  year={2025}
}