
Internal Bias in Reasoning Models leads to Overthinking

Abstract

While current reasoning models possess strong exploratory capabilities, they are often criticized for overthinking due to redundant and unnecessary reflections. In this work, we reveal for the first time that overthinking in reasoning models may stem from their internal bias towards input texts. Upon encountering a reasoning problem, the model immediately forms a preliminary guess about the answer, which we term an internal bias since it is not derived through actual reasoning. When this guess conflicts with its reasoning result, the model tends to engage in reflection, wasting computational resources. Through further interpretability experiments, we find that this behavior is largely driven by the model's excessive attention to the input section, which amplifies the influence of internal bias on its decision-making process. Additionally, by masking out the original input section, the effect of internal bias can be effectively alleviated and the reasoning length can be reduced by 31%-53% across different complex reasoning tasks. Notably, in most cases, this approach also leads to improvements in accuracy. These findings demonstrate a causal relationship between internal bias and overthinking.
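The masking intervention described above can be illustrated with a minimal sketch. The helper below is hypothetical (not the authors' actual code); it assumes a flat attention mask over the token sequence and simply zeroes out the positions of the original input section so that subsequent decoding steps cannot attend to it:

```python
def mask_input_section(attention_mask, input_start, input_end):
    """Return a copy of the attention mask with the original input
    section zeroed out, so generated tokens no longer attend to it.

    attention_mask: list of 0/1 flags, one per token position.
    input_start, input_end: half-open span [input_start, input_end)
        covering the original problem statement tokens.
    (Hypothetical helper for illustration only.)
    """
    masked = list(attention_mask)
    for i in range(input_start, input_end):
        masked[i] = 0
    return masked


# Example: a 10-token sequence where positions 0-5 hold the input text.
mask = [1] * 10
print(mask_input_section(mask, 0, 6))  # [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
```

In a real transformer decoder, the same idea would be applied to the `attention_mask` passed into the model's forward pass once the reasoning trace has begun, leaving only the generated tokens visible.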

@article{dang2025_2505.16448,
  title={Internal Bias in Reasoning Models leads to Overthinking},
  author={Renfei Dang and Shujian Huang and Jiajun Chen},
  journal={arXiv preprint arXiv:2505.16448},
  year={2025}
}