Accelerated Test-Time Scaling with Model-Free Speculative Sampling

Abstract

Language models have demonstrated remarkable capabilities on reasoning tasks through test-time scaling techniques such as best-of-N sampling and tree search. However, these approaches often demand substantial computational resources, creating a critical trade-off between performance and efficiency. We introduce STAND (STochastic Adaptive N-gram Drafting), a model-free speculative decoding approach that exploits the inherent redundancy in reasoning trajectories to achieve significant acceleration without compromising accuracy. Our analysis reveals that reasoning trajectories frequently reuse similar patterns, enabling efficient model-free token prediction without a separate draft model. By introducing stochastic drafting and preserving probabilistic information through a memory-efficient logit-based N-gram module, combined with optimized Gumbel-Top-K sampling and data-driven tree construction, STAND significantly improves token acceptance rates. Extensive evaluations across multiple models and reasoning tasks (AIME-2024, GPQA-Diamond, and LiveCodeBench) show that STAND reduces inference latency by 60-65% compared to standard autoregressive decoding while maintaining accuracy. Furthermore, STAND outperforms state-of-the-art speculative decoding methods by 14-28% in throughput and remains strong even in single-trajectory scenarios, reducing inference latency by 48-58%. As a model-free approach, STAND can be applied to any existing language model without additional training, making it a plug-and-play solution for accelerating language model reasoning.
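
The abstract names two mechanisms: a memory-efficient logit-based N-gram module and Gumbel-Top-K draft sampling. The following is a minimal Python sketch of how those two ideas can fit together, not the authors' implementation: it assumes a simple running-average logit table keyed by the last N-1 tokens, and all identifiers (NGramLogitTable, update, draft_gumbel_topk) are hypothetical.

from __future__ import annotations
import numpy as np

class NGramLogitTable:
    """Hypothetical sketch of a logit-preserving N-gram table.

    Instead of storing only the most frequent next token per context,
    it keeps a running average of the model's next-token logits, so
    drafting can sample stochastically rather than deterministically.
    """

    def __init__(self, n: int = 3, vocab_size: int = 32000):
        self.n = n
        self.vocab_size = vocab_size
        self.logits: dict[tuple, np.ndarray] = {}
        self.counts: dict[tuple, int] = {}

    def update(self, tokens: list[int], step_logits: np.ndarray) -> None:
        """Record the logits the model produced after the last n-1 tokens."""
        if len(tokens) < self.n - 1:
            return
        ctx = tuple(tokens[-(self.n - 1):])
        c = self.counts.get(ctx, 0)
        prev = self.logits.get(ctx, np.zeros(self.vocab_size))
        # Running mean keeps memory at one logit vector per context.
        self.logits[ctx] = (prev * c + step_logits) / (c + 1)
        self.counts[ctx] = c + 1

    def draft_gumbel_topk(self, tokens: list[int], k: int = 4,
                          rng: np.random.Generator | None = None) -> list[int]:
        """Draw k distinct draft tokens for one tree level.

        Gumbel-Top-K trick: adding i.i.d. Gumbel noise to logits and
        taking the k largest scores is equivalent to sampling k tokens
        without replacement from the softmax distribution.
        """
        rng = rng or np.random.default_rng()
        ctx = tuple(tokens[-(self.n - 1):])
        if ctx not in self.logits:
            return []  # no N-gram hit: caller falls back to plain decoding
        scores = self.logits[ctx] + rng.gumbel(size=self.vocab_size)
        return np.argsort(scores)[-k:][::-1].tolist()

# Illustrative usage with random stand-in logits (a real system would
# feed the target model's logits into update() during decoding):
rng = np.random.default_rng(0)
table = NGramLogitTable(n=3, vocab_size=100)
table.update([5, 17, 42], rng.normal(size=100))
candidates = table.draft_gumbel_topk([3, 17, 42], k=4)  # same (17, 42) context

The key design point this sketch tries to capture is that keeping logits (rather than a single argmax continuation) lets the drafter propose several plausible branches per context, which is what a data-driven draft tree needs.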

@article{song2025_2506.04708,
  title={Accelerated Test-Time Scaling with Model-Free Speculative Sampling},
  author={Woomin Song and Saket Dingliwal and Sai Muralidhar Jayanthi and Bhavana Ganesh and Jinwoo Shin and Aram Galstyan and Sravan Babu Bodapati},
  journal={arXiv preprint arXiv:2506.04708},
  year={2025}
}
Main: 8 pages, 6 figures, 4 tables; bibliography: 2 pages; appendix: 1 page.