How does Transformer Learn Implicit Reasoning?

Recent work suggests that large language models (LLMs) can perform multi-hop reasoning implicitly -- producing correct answers without explicitly verbalizing intermediate steps -- but the underlying mechanisms remain poorly understood. In this paper, we study how such implicit reasoning emerges by training transformers from scratch in a controlled symbolic environment. Our analysis reveals a three-stage developmental trajectory: early memorization, followed by in-distribution generalization, and eventually cross-distribution generalization. We find that training with atomic triples is not necessary but accelerates learning, and that second-hop generalization relies on query-level exposure to specific compositional structures. To interpret these behaviors, we introduce two diagnostic tools: cross-query semantic patching, which identifies semantically reusable intermediate representations, and a cosine-based representational lens, which reveals that successful reasoning correlates with cosine-based clustering in hidden space. This clustering phenomenon in turn provides a coherent explanation for the behavioral dynamics observed across training, linking representational structure to reasoning capability. These findings provide new insights into the interpretability of implicit multi-hop reasoning in LLMs, helping to clarify how complex reasoning processes unfold internally and offering pathways to enhance the transparency of such models.
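As a rough illustration of what a cosine-based representational lens might compute, the sketch below measures pairwise cosine similarities between hidden states collected at a fixed layer and token position, and compares within-group versus between-group similarity. The grouping of queries by a shared intermediate (bridge) entity is an assumption on our part; the abstract does not specify how clusters are defined, so this is a minimal sketch rather than the paper's actual procedure.

```python
import torch
import torch.nn.functional as F


def cosine_similarity_matrix(hidden_states: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarities between per-query hidden vectors.

    hidden_states: (n_queries, d_model) activations taken at one layer
    and one token position, one row per two-hop query.
    """
    normed = F.normalize(hidden_states, dim=-1)
    return normed @ normed.T  # (n_queries, n_queries)


def within_vs_between(hidden_states: torch.Tensor, labels: torch.Tensor):
    """Average cosine similarity within vs. across label groups.

    labels: (n_queries,) integer ids; here assumed (hypothetically) to
    mark queries that share the same intermediate bridge entity.
    """
    sims = cosine_similarity_matrix(hidden_states)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    off_diag = ~torch.eye(len(labels), dtype=torch.bool)
    within = sims[same & off_diag].mean()
    between = sims[~same].mean()
    return within.item(), between.item()
```

Under this reading, a large gap between within-group and between-group similarity would indicate that queries sharing an intermediate entity cluster together in hidden space, which is the kind of signal the abstract links to successful implicit reasoning.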
@article{ye2025_2505.23653,
  title={How does Transformer Learn Implicit Reasoning?},
  author={Jiaran Ye and Zijun Yao and Zhidian Huang and Liangming Pan and Jinxin Liu and Yushi Bai and Amy Xin and Liu Weichuan and Xiaoyin Che and Lei Hou and Juanzi Li},
  journal={arXiv preprint arXiv:2505.23653},
  year={2025}
}