
Increasing the Thinking Budget is Not All You Need

Ignacio Iacobacci
Zhaozhi Qian
Faroq AL-Tam
Muhammad AL-Qurishi
Riad Souissi
Comments: 4 pages (main), 5 figures, 3 tables, 1-page bibliography, 4-page appendix
Abstract

Recently, a new wave of thinking-capable Large Language Models has emerged, demonstrating exceptional capabilities across a wide range of reasoning benchmarks. Early studies have begun to explore how the amount of compute, measured as the length of the reasoning process (the so-called thinking budget), affects model performance. In this work, we propose a systematic investigation of the thinking budget as a key parameter, examining its interaction with configurations such as self-consistency, reflection, and others. Our goal is to provide an informative, balanced comparison framework that considers both performance outcomes and computational cost. Among our findings, simply increasing the thinking budget is not the most effective use of compute: more accurate responses can instead be achieved through alternative configurations, such as self-consistency and self-reflection.
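To make the comparison concrete, here is a minimal sketch (not the paper's code) contrasting two ways of spending the same compute: one long reasoning pass versus self-consistency, i.e. several shorter passes followed by a majority vote. The `generate` callable is a hypothetical stand-in for any thinking-capable LLM API that accepts a prompt and a thinking budget and returns a final answer string.

```python
# Sketch only: `generate(prompt, budget)` is an assumed interface, not a real API.
from collections import Counter
from typing import Callable


def single_pass(generate: Callable[[str, int], str],
                prompt: str, budget: int) -> str:
    """Spend the entire thinking budget on a single reasoning trace."""
    return generate(prompt, budget)


def self_consistency(generate: Callable[[str, int], str],
                     prompt: str, budget: int, k: int = 5) -> str:
    """Split the same budget across k samples and majority-vote their answers."""
    per_sample_budget = budget // k
    answers = [generate(prompt, per_sample_budget) for _ in range(k)]
    best_answer, _ = Counter(answers).most_common(1)[0]
    return best_answer
```

Under a fixed total budget, the two functions consume comparable compute; the paper's finding is that aggregation strategies of this kind can yield more accurate answers than simply enlarging the single-pass budget.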
