
SSR: Speculative Parallel Scaling Reasoning in Test-time

Abstract

Large language models (LLMs) have achieved impressive results on multi-step mathematical reasoning, yet at the cost of high computational overhead. This challenge is particularly acute for test-time scaling methods such as parallel decoding, which increase answer diversity but scale poorly in efficiency. To address this efficiency-accuracy trade-off, we propose SSR (Speculative Parallel Scaling Reasoning), a training-free framework that leverages a key insight: by introducing speculative decoding at the step level, we can accelerate reasoning without sacrificing correctness. SSR integrates two components: a Selective Parallel Module (SPM) that identifies a small set of promising reasoning strategies via model-internal scoring, and Step-level Speculative Decoding (SSD), which enables efficient draft-target collaboration for fine-grained reasoning acceleration. Experiments on three mathematical benchmarks (AIME 2024, MATH-500, and LiveMathBench) demonstrate that SSR achieves strong gains over baselines. For instance, on LiveMathBench, SSR improves pass@1 accuracy by 13.84% while reducing computation to 80.5% of the baseline FLOPs. On MATH-500, SSR reduces compute to only 30% with no loss in accuracy.
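To make the step-level draft-target idea concrete, the following is a minimal conceptual sketch in Python. It is not the paper's implementation: the function names, the threshold-based acceptance rule, and the toy stand-in models are all assumptions introduced for illustration. A cheap draft model proposes each reasoning step, and the larger target model only regenerates a step when it rejects the draft.

```python
# Conceptual sketch of step-level speculative decoding (SSD-style), assuming a
# draft/target pair exposed as plain text-completion callables. The acceptance
# rule and all names below are illustrative, not the authors' exact method.
from typing import Callable, List

def generate_step(model: Callable[[str], str], context: str) -> str:
    """Ask a model to produce the next reasoning step given the context so far."""
    return model(context)

def target_accepts(target_score: Callable[[str, str], float],
                   context: str, step: str, threshold: float = 0.5) -> bool:
    """Accept the drafted step if the target model scores it above a threshold
    (a stand-in for whatever verification rule the target model applies)."""
    return target_score(context, step) >= threshold

def ssd_reason(draft_model: Callable[[str], str],
               target_model: Callable[[str], str],
               target_score: Callable[[str, str], float],
               problem: str, max_steps: int = 8) -> List[str]:
    """Build a reasoning trace step by step: the cheap draft model proposes each
    step; the target model regenerates a step only when it rejects the draft."""
    context, trace = problem, []
    for _ in range(max_steps):
        step = generate_step(draft_model, context)        # cheap draft proposal
        if not target_accepts(target_score, context, step):
            step = generate_step(target_model, context)   # fall back to target
        trace.append(step)
        context += "\n" + step
        if "ANSWER:" in step:                             # simple stop condition
            break
    return trace

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end without real LLMs.
    draft = lambda ctx: f"draft step {ctx.count(chr(10)) + 1}"
    target = lambda ctx: (f"target step {ctx.count(chr(10)) + 1}"
                          + (" ANSWER: 42" if ctx.count(chr(10)) >= 2 else ""))
    score = lambda ctx, step: 0.3 if "3" in step else 0.9  # pretend step 3 is dubious
    print(ssd_reason(draft, target, score, "Problem: compute 6*7"))
```

The savings come from the draft model doing most of the generation and the target model being invoked mainly for verification; the SPM component would sit in front of this loop, selecting which few reasoning strategies are worth running in parallel at all.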

@article{chu2025_2505.15340,
  title={SSR: Speculative Parallel Scaling Reasoning in Test-time},
  author={Yuanlin Chu and Bo Wang and Xiang Liu and Hong Chen and Aiwei Liu and Xuming Hu},
  journal={arXiv preprint arXiv:2505.15340},
  year={2025}
}