Plan and Budget: Effective and Efficient Test-Time Scaling on Large Language Model Reasoning

Abstract

Large Language Models (LLMs) have achieved remarkable success in complex reasoning tasks, but their inference remains computationally inefficient. We observe overthinking, a common failure mode in many prevalent LLMs, where models generate verbose and tangential reasoning traces even for simple queries. Recent works have tried to mitigate this by enforcing fixed token budgets; however, this can lead to underthinking, especially on harder problems. Through empirical analysis, we identify that this inefficiency often stems from unclear problem-solving strategies. To formalize this, we develop a theoretical model, BBAM (Bayesian Budget Allocation Model), which models reasoning as a sequence of sub-questions with varying uncertainty, and introduce the E^3 metric to capture the trade-off between correctness and computational efficiency. Building on theoretical results from BBAM, we propose Plan-and-Budget, a model-agnostic, test-time framework that decomposes complex queries into sub-questions and allocates token budgets based on estimated complexity using adaptive scheduling. Plan-and-Budget improves reasoning efficiency across a range of tasks and models, achieving up to +70% accuracy gains, -39% token reduction, and +187.5% improvement in E^3. Notably, it elevates a smaller model (DS-Qwen-32B) to match the efficiency of a larger model (DS-LLaMA-70B), demonstrating Plan-and-Budget's ability to close performance gaps without retraining. Our code is available at this http URL.
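To make the test-time recipe in the abstract concrete, below is a minimal Python sketch of the Plan-and-Budget idea: decompose a query into sub-questions, then allot each a token budget proportional to an estimated-complexity weight. The `llm(prompt, max_tokens)` client, the decomposition prompt, and the length-based complexity heuristic are illustrative assumptions, not the paper's actual implementation or scheduling rule.

```python
def plan_and_budget(llm, query, total_budget=2048):
    """Hedged sketch of Plan-and-Budget; `llm` is any callable
    (prompt, max_tokens) -> str. Details here are assumptions."""
    # Plan: ask the model to decompose the query into sub-questions.
    plan = llm(
        "Break the following problem into a short list of sub-questions, "
        f"one per line:\n{query}",
        max_tokens=256,
    )
    sub_questions = [s.strip() for s in plan.splitlines() if s.strip()]

    # Budget: weight each sub-question by a crude complexity proxy
    # (word count, an assumption) and split the budget proportionally,
    # with a small floor so no sub-question is starved.
    weights = [max(len(q.split()), 1) for q in sub_questions]
    total_w = sum(weights)
    budgets = [max(total_budget * w // total_w, 32) for w in weights]

    # Solve each sub-question within its allotted budget, carrying context.
    context = query
    for q, b in zip(sub_questions, budgets):
        answer = llm(
            f"{context}\n\nSub-question: {q}\nAnswer concisely:",
            max_tokens=b,
        )
        context += f"\n{q}\n{answer}"

    # Final synthesis step over the accumulated sub-answers.
    return llm(f"{context}\n\nGive the final answer:", max_tokens=128)
```

The key design choice the abstract describes is that the budget split is adaptive per sub-question rather than a single fixed cap for the whole trace; any complexity estimator (here, word count) can stand in for the paper's uncertainty-based scheduling.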

@article{lin2025_2505.16122,
  title={Plan and Budget: Effective and Efficient Test-Time Scaling on Large Language Model Reasoning},
  author={Junhong Lin and Xinyue Zeng and Jie Zhu and Song Wang and Julian Shun and Jun Wu and Dawei Zhou},
  journal={arXiv preprint arXiv:2505.16122},
  year={2025}
}