
ThinkSwitcher: When to Think Hard, When to Think Fast

Abstract

Large reasoning models (LRMs) excel at solving complex tasks by leveraging long chain-of-thought (CoT) reasoning. However, this often leads to overthinking on simple tasks, resulting in unnecessary computational overhead. We observe that LRMs inherently possess the capability for efficient short CoT reasoning, which can be reliably elicited through prompt design. To leverage this capability, we propose ThinkSwitcher, a framework that enables a single LRM to dynamically switch between short and long CoT modes based on task complexity. ThinkSwitcher introduces a lightweight switching module trained with supervision signals derived from the relative performance of each reasoning mode across tasks. Experiments on multiple reasoning benchmarks show that ThinkSwitcher reduces computational cost by 20-30% while maintaining high accuracy on complex tasks. This demonstrates the effectiveness of ThinkSwitcher as a scalable and efficient solution for unified LRM deployment.
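The abstract leaves the switcher's design unspecified. As a rough illustration only, the sketch below shows one plausible shape for such a module: a small two-headed MLP that predicts a pass rate for each reasoning mode from a query embedding and routes to long CoT only when the predicted gain clears a margin. All names, the architecture, and the routing rule here are assumptions for exposition, not details from the paper.

import torch
import torch.nn as nn

class SwitchingModule(nn.Module):
    """Hypothetical lightweight switcher: predicts a pass rate for the
    short-CoT and long-CoT modes from a query embedding. In training,
    the targets could be each mode's empirical pass rate on the task,
    matching the paper's idea of supervision from relative performance."""

    def __init__(self, embed_dim: int = 768, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # [short_cot_logit, long_cot_logit]
        )

    def forward(self, query_embedding: torch.Tensor) -> torch.Tensor:
        # Sigmoid maps the two logits to per-mode pass-rate estimates in [0, 1].
        return torch.sigmoid(self.net(query_embedding))

def choose_mode(switcher: SwitchingModule,
                query_embedding: torch.Tensor,
                margin: float = 0.1) -> str:
    """Route to long CoT only when its predicted pass rate beats the
    short-CoT estimate by more than `margin` (a tunable cost knob)."""
    short_rate, long_rate = switcher(query_embedding).unbind(-1)
    return "long" if (long_rate - short_rate).item() > margin else "short"

# Example: a random embedding standing in for an encoded query.
switcher = SwitchingModule()
print(choose_mode(switcher, torch.randn(768)))

Under this reading, the margin parameter would control the accuracy-versus-cost trade-off: raising it routes more queries to the cheap short-CoT mode, which is one way a 20-30% cost reduction could be realized in practice.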

@article{liang2025_2505.14183,
  title={ThinkSwitcher: When to Think Hard, When to Think Fast},
  author={Guosheng Liang and Longguang Zhong and Ziyi Yang and Xiaojun Quan},
  journal={arXiv preprint arXiv:2505.14183},
  year={2025}
}