Speculative Decoding Meets Quantization: Compatibility Evaluation and Hierarchical Framework Design

Abstract

Speculative decoding and quantization both effectively accelerate memory-bound inference of large language models. Speculative decoding mitigates the memory-bandwidth bottleneck by verifying multiple tokens within a single forward pass, at the cost of increased computation. Quantization achieves the same goal by compressing weights and activations into lower bit-widths, and further reduces computation via low-bit matrix multiplications. To combine their strengths, we investigate integrating the two techniques. Surprisingly, experiments applying the advanced speculative decoding method EAGLE-2 to various quantized models reveal that the memory benefits of 4-bit weight quantization are offset by the computational load of speculative decoding: verifying a tree-style draft incurs significantly more time overhead than a single-token forward pass on 4-bit weight-quantized models. This finding motivates our new speculative decoding design: a hierarchical framework that employs a small model as an intermediate stage to turn tree-style drafts into sequence drafts, preserving the memory-access benefits of the quantized target model. Experimental results show that our hierarchical approach achieves a 2.78× speedup across various tasks for the 4-bit weight-quantized Llama-3-70B model on an A100 GPU, outperforming EAGLE-2 by 1.31×. Code is available at this https URL.
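
To make the three-stage pipeline concrete, below is a minimal sketch in Python of the control flow the abstract describes: a lightweight drafter proposes a tree of candidates, a small intermediate model prunes it into a single sequence draft, and the 4-bit target model verifies that sequence in one linear forward pass. All interface names (draft_tree, prune_tree, verify_sequence) and the toy stand-in models are hypothetical illustrations under stated assumptions, not the paper's actual implementation.

# Hierarchical speculative decoding step (sketch; names are hypothetical).
# The tree is simplified here to a list of candidate branches.
from typing import Callable, List

Token = int

def hierarchical_step(
    prefix: List[Token],
    draft_tree: Callable[[List[Token]], List[List[Token]]],
    prune_tree: Callable[[List[Token], List[List[Token]]], List[Token]],
    verify_sequence: Callable[[List[Token], List[Token]], List[Token]],
) -> List[Token]:
    """One decoding step: tree draft -> sequence draft -> target verification."""
    branches = draft_tree(prefix)                 # stage 1: tiny drafter proposes a tree
    seq_draft = prune_tree(prefix, branches)      # stage 2: small model keeps one branch
    accepted = verify_sequence(prefix, seq_draft) # stage 3: quantized target verifies the
                                                  # sequence in a single forward pass
    return prefix + accepted

# Toy demo: the "models" are hard-coded so the control flow runs as-is.
if __name__ == "__main__":
    draft = lambda p: [[1, 2, 3], [1, 4]]          # two candidate branches
    prune = lambda p, tree: max(tree, key=len)     # intermediate model flattens the tree
    verify = lambda p, seq: seq[:2] + [9]          # target accepts 2 tokens, emits 1 new
    print(hierarchical_step([0], draft, prune, verify))  # -> [0, 1, 2, 9]

The key design point, per the abstract, is that the expensive quantized target model only ever sees a linear sequence, so it avoids the tree-verification overhead that erodes the memory savings of 4-bit weights.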

@article{zhang2025_2505.22179,
  title={Speculative Decoding Meets Quantization: Compatibility Evaluation and Hierarchical Framework Design},
  author={Yudi Zhang and Weilin Zhao and Xu Han and Tiejun Zhao and Wang Xu and Hailong Cao and Conghui Zhu},
  journal={arXiv preprint arXiv:2505.22179},
  year={2025}
}