Speculative Decoding Speed-of-Light: Optimal Lower Bounds via Branching Random Walks

Sergey Pankratov
Dan Alistarh
Main:11 Pages
2 Figures
Bibliography:3 Pages
1 Table
Abstract

Speculative generation has emerged as a promising technique to accelerate inference in large language models (LLMs) by leveraging parallelism to verify multiple draft tokens simultaneously. However, the fundamental limits on the achievable speedup remain poorly understood. In this work, we establish the first "tight" lower bounds on the runtime of any deterministic speculative generation algorithm. This is achieved by drawing a parallel between the token generation process and branching random walks, which allows us to analyze the optimal draft tree selection problem. We prove, under basic assumptions, that the expected number of tokens successfully predicted per speculative iteration is bounded as $\mathbb{E}[X] \leq (\mu + \mu_{(2)})\log(P)/\mu^2 + O(1)$, where $P$ is the verifier's capacity, $\mu$ is the expected entropy of the verifier's output distribution, and $\mu_{(2)}$ is the expected second log-moment. This result provides new insights into the limits of parallel token generation, and could guide the design of future speculative decoding systems. Empirical evaluations on Llama models validate our theoretical predictions, confirming the tightness of our bounds in practical settings.
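As a minimal illustration, the leading term of the stated bound can be evaluated numerically. The function below is a sketch based only on the formula in the abstract; the input values in the example are made up for demonstration and are not measured from any model.

```python
import math

def speedup_upper_bound(mu: float, mu2: float, P: int) -> float:
    """Leading term of the bound E[X] <= (mu + mu_(2)) * log(P) / mu^2 + O(1).

    mu  : expected entropy of the verifier's output distribution (in nats)
    mu2 : expected second log-moment of the verifier's output distribution
    P   : verifier capacity (number of draft tokens verified in parallel)

    The additive O(1) constant from the theorem is omitted here.
    """
    return (mu + mu2) * math.log(P) / mu ** 2

# Illustrative (hypothetical) values: mu = 2.0 nats, mu2 = 5.0, capacity P = 64.
bound = speedup_upper_bound(mu=2.0, mu2=5.0, P=64)
print(f"Upper bound on expected accepted tokens per iteration: {bound:.2f}")
```

Note that the bound grows only logarithmically in the capacity $P$, which is why increasing the number of parallel draft tokens yields diminishing returns.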
