DynScaling: Efficient Verifier-free Inference Scaling via Dynamic and Integrated Sampling

Inference-time scaling has proven effective in boosting large language model (LLM) performance through increased test-time computation. Yet, its practical application is often hindered by reliance on external verifiers or a lack of optimization for realistic computational constraints. We propose DynScaling, which addresses these limitations through two primary innovations: an integrated parallel-sequential sampling strategy and a bandit-based dynamic budget allocation framework. The integrated sampling strategy unifies parallel and sequential sampling by constructing synthetic sequential reasoning chains from initially independent parallel responses, promoting diverse and coherent reasoning trajectories. The dynamic budget allocation framework formulates the allocation of computational resources as a multi-armed bandit problem, adaptively distributing the inference budget across queries based on the uncertainty of previously sampled responses, thereby maximizing computational efficiency. By combining these components, DynScaling effectively improves LLM performance under practical resource constraints without the need for external verifiers. Experimental results demonstrate that DynScaling consistently surpasses existing verifier-free inference scaling baselines in both task performance and computational cost.
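The abstract's bandit formulation can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's algorithm: each query is treated as a bandit arm, response uncertainty is approximated by the Shannon entropy of the empirical answer distribution, and the next sample is spent on the query with the highest UCB-style score (entropy plus an exploration bonus). The names `answer_entropy`, `allocate_budget`, and `sample_fn` are all hypothetical.

```python
import math
from collections import Counter

def answer_entropy(answers):
    """Shannon entropy of the empirical answer distribution: a simple
    proxy for response uncertainty (an assumption; the paper's exact
    uncertainty measure is not given in the abstract)."""
    counts = Counter(answers)
    n = len(answers)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def allocate_budget(queries, sample_fn, total_budget, init_samples=2, c=1.0):
    """UCB-style dynamic budget allocation: draw a few initial samples
    per query, then repeatedly spend the next unit of budget on the
    query whose uncertainty plus exploration bonus is largest."""
    responses = {q: [sample_fn(q) for _ in range(init_samples)]
                 for q in queries}
    spent = len(queries) * init_samples
    while spent < total_budget:
        t = spent  # total samples drawn so far, used in the bonus term

        def ucb(q):
            n = len(responses[q])
            return answer_entropy(responses[q]) + c * math.sqrt(math.log(t) / n)

        best = max(queries, key=ucb)          # most uncertain query wins
        responses[best].append(sample_fn(best))
        spent += 1
    return responses
```

Under this scheme a query whose samples all agree (zero entropy) quickly stops receiving budget, while a query with conflicting answers keeps drawing samples, which matches the abstract's goal of adaptively distributing the inference budget by uncertainty.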
@article{wang2025_2506.16043,
  title={DynScaling: Efficient Verifier-free Inference Scaling via Dynamic and Integrated Sampling},
  author={Fei Wang and Xingchen Wan and Ruoxi Sun and Jiefeng Chen and Sercan Ö. Arık},
  journal={arXiv preprint arXiv:2506.16043},
  year={2025}
}