
When Can Model-Free Reinforcement Learning be Enough for Thinking?

Main: 8 pages
Bibliography: 5 pages
Appendix: 2 pages
2 figures
2 tables
Abstract

Recent work on large language models has demonstrated the use of model-free reinforcement learning (RL) to train reasoning-like capabilities. The emergence of "thinking" through model-free RL is notable because thinking actions neither produce reward nor change the external world state to one from which the agent is more likely to receive reward. This paper seeks to build a domain-independent understanding of when model-free RL will lead to "thinking" as a strategy for reward maximization. To build this understanding, we first introduce a theoretical model that we call a thought Markov decision process (MDP). Thought MDPs minimally extend the classical MDP model to include an abstract notion of thought state and thought action. Using the thought MDP model, we prove the importance of policy initialization in determining whether or not thinking emerges and show formally that thought actions are equivalent to the agent choosing to perform a step of policy improvement before continuing to act. We then show that open-source LLMs satisfy the conditions that our theory predicts are necessary for model-free RL to produce thinking-like behavior. Finally, we hypothesize sufficient conditions that would enable thinking to be learned outside of language generation and introduce a toy domain where a combination of multi-task pre-training and designated thought actions enables more data-efficient RL compared to non-thinking agents.
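
As a rough illustration of the model described in the abstract, the Python sketch below treats a thought MDP as a classical tabular MDP augmented with a thought state, where thought actions yield zero reward and leave the external world state unchanged, while ordinary environment actions change the state and may produce reward. The field names, tabular representation, and toy example are illustrative assumptions, not the paper's formal definitions.

import random
from dataclasses import dataclass
from typing import Dict, Hashable, Set, Tuple

State = Hashable
Thought = Hashable
Action = Hashable

@dataclass
class ThoughtMDP:
    # Assumed sketch of a thought MDP: a tabular MDP plus a thought component.
    states: Set[State]
    thoughts: Set[Thought]                      # abstract thought states
    env_actions: Set[Action]                    # ordinary actions: change the world, may yield reward
    thought_actions: Set[Action]                # thinking actions: update only the thought state
    transitions: Dict[Tuple[State, Action], Dict[State, float]]  # (s, a) -> {s': prob}
    rewards: Dict[Tuple[State, Action], float]                    # (s, a) -> reward
    thought_transitions: Dict[Tuple[Thought, Action], Thought]    # (t, a_think) -> t'
    gamma: float = 0.99

    def step(self, state: State, thought: Thought, action: Action):
        """Advance the joint (state, thought) pair by one action."""
        if action in self.thought_actions:
            # Thinking: zero reward, external state unchanged.
            next_thought = self.thought_transitions[(thought, action)]
            return state, next_thought, 0.0
        # Acting: sample the next external state and collect reward.
        next_dist = self.transitions[(state, action)]
        next_state = random.choices(list(next_dist), weights=list(next_dist.values()))[0]
        return next_state, thought, self.rewards[(state, action)]

# Tiny hypothetical example: one real action, one designated thought action.
mdp = ThoughtMDP(
    states={"s0", "s1"},
    thoughts={"unplanned", "planned"},
    env_actions={"act"},
    thought_actions={"think"},
    transitions={("s0", "act"): {"s1": 1.0}},
    rewards={("s0", "act"): 1.0},
    thought_transitions={("unplanned", "think"): "planned"},
)
print(mdp.step("s0", "unplanned", "think"))  # ('s0', 'planned', 0.0): thinking leaves s0 unchanged
print(mdp.step("s0", "planned", "act"))      # ('s1', 'planned', 1.0): acting moves the world state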

@article{hanna2025_2506.17124,
  title={When Can Model-Free Reinforcement Learning be Enough for Thinking?},
  author={Josiah P. Hanna and Nicholas E. Corrado},
  journal={arXiv preprint arXiv:2506.17124},
  year={2025}
}