SPECS: Faster Test-Time Scaling through Speculative Drafts

Main: 6 pages, 6 figures, 3 tables; Bibliography: 4 pages; Appendix: 18 pages
Abstract

Scaling test-time compute has driven the recent advances in the reasoning capabilities of large language models (LLMs), typically by allocating additional computation for more thorough exploration. However, increased compute often comes at the expense of higher user-facing latency, directly impacting user experience. Current test-time scaling methods primarily optimize for accuracy under a total compute budget (FLOPs), often overlooking latency constraints. To address this gap, we propose SPECS, a latency-aware test-time scaling method inspired by speculative decoding. SPECS uses a smaller, faster model to generate candidate sequences efficiently, and evaluates these candidates using signals from both a larger target model and a dedicated reward model. We introduce new integration strategies, including reward-guided soft verification and a reward-based deferral mechanism. Empirical results on the MATH500, AMC23, and OlympiadBench datasets show that SPECS matches or surpasses beam search accuracy while reducing latency by up to approximately 19.1%. Our theoretical analysis shows that our algorithm converges to the solution of a KL-regularized reinforcement learning objective as the beam width increases.
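
To make the draft-then-verify loop described above concrete, the minimal Python sketch below stubs out the draft, target, and reward models with random scores. The function names (draft_candidates, soft_verify, specs_step), the combined log-probability-plus-reward scoring rule, and the deferral threshold are illustrative assumptions for this sketch, not the paper's exact algorithm.

import math
import random

# Hypothetical stand-ins for the draft, target, and reward models.
# Real usage would call LLMs; stubs keep the control flow runnable.

def draft_candidates(prefix: str, num_candidates: int) -> list[str]:
    """Small, fast model proposes candidate continuations of the prefix."""
    return [f"{prefix} step{random.randint(0, 9)}" for _ in range(num_candidates)]

def target_logprob(candidate: str) -> float:
    """Large target model's log-probability of the candidate (stubbed)."""
    return -random.uniform(0.5, 3.0)

def reward_score(candidate: str) -> float:
    """Reward model's score for the candidate (stubbed)."""
    return random.uniform(0.0, 1.0)

def soft_verify(candidates: list[str], temperature: float = 1.0) -> tuple[str, float]:
    """Reward-guided soft verification (assumed form): sample a candidate with
    probability proportional to exp((log p_target + reward) / temperature)."""
    scores = [target_logprob(c) + reward_score(c) for c in candidates]
    weights = [math.exp(s / temperature) for s in scores]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for cand, w, s in zip(candidates, weights, scores):
        acc += w
        if r <= acc:
            return cand, s
    return candidates[-1], scores[-1]

def specs_step(prefix: str, num_candidates: int = 4, defer_threshold: float = -1.0) -> str:
    """One draft/verify/defer round: keep the drafter's sampled candidate unless
    its combined score falls below the threshold, in which case defer the step
    to the (stubbed) large target model."""
    candidates = draft_candidates(prefix, num_candidates)
    chosen, score = soft_verify(candidates)
    if score < defer_threshold:
        chosen = f"{prefix} [target-model step]"  # deferral: regenerate with the large model
    return chosen

if __name__ == "__main__":
    prefix = "Problem: ..."
    for _ in range(3):
        prefix = specs_step(prefix)
    print(prefix)

The point the sketch mirrors is that the cheap drafter produces every candidate, while the more expensive target and reward models are used only to score, select, and occasionally override those drafts, which is where the latency savings come from.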

@article{cemri2025_2506.15733,
  title={$\texttt{SPECS}$: Faster Test-Time Scaling through Speculative Drafts},
  author={Mert Cemri and Nived Rajaraman and Rishabh Tiwari and Xiaoxuan Liu and Kurt Keutzer and Ion Stoica and Kannan Ramchandran and Ahmad Beirami and Ziteng Sun},
  journal={arXiv preprint arXiv:2506.15733},
  year={2025}
}