
Accelerating Diffusion Large Language Models with SlowFast Sampling: The Three Golden Principles

Main: 9 pages, 6 figures, 4 tables; Bibliography: 2 pages
Abstract

Diffusion-based language models (dLLMs) have emerged as a promising alternative to traditional autoregressive LLMs by enabling parallel token generation and significantly reducing inference latency. However, existing sampling strategies for dLLMs, such as confidence-based or semi-autoregressive decoding, often suffer from static behavior, leading to suboptimal efficiency and limited flexibility. In this paper, we propose SlowFast Sampling, a novel dynamic sampling strategy that adaptively alternates between exploratory and accelerated decoding stages. Our method is guided by three golden principles: the certainty principle, the convergence principle, and the positional principle, which govern when and where tokens can be confidently and efficiently decoded. We further integrate our strategy with dLLM-Cache to reduce redundant computation. Extensive experiments across benchmarks and models show that SlowFast Sampling achieves up to 15.63× speedup on LLaDA with minimal accuracy drop, and up to 34.22× when combined with caching. Notably, our approach outperforms strong autoregressive baselines like LLaMA3 8B in throughput, demonstrating that well-designed sampling can unlock the full potential of dLLMs for fast and high-quality generation.
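The abstract's three principles map naturally onto a simple decoding loop, sketched below in Python. This is a hypothetical illustration under stated assumptions, not the paper's implementation: toy_denoiser, MASK_ID, and the thresholds tau_conf, stable_steps, and fast_window are all made up for demonstration. The certainty principle gates commits on per-token confidence, the convergence principle gates them on how long a position's prediction has stayed unchanged, and the positional principle confines parallel (fast) commits to the earliest undecoded window; when too few tokens qualify, the loop falls back to a slow, one-token exploratory step.

# Minimal, self-contained sketch of SlowFast-style dynamic sampling for a
# masked-diffusion LM. All names and thresholds below are illustrative
# assumptions, not the authors' actual code.
import torch

VOCAB, SEQ_LEN = 50, 32
MASK_ID = VOCAB  # sentinel outside the vocab, so a decoded token never equals it

torch.manual_seed(0)
FIXED_LOGITS = 4.0 * torch.randn(SEQ_LEN, VOCAB)

def toy_denoiser(tokens: torch.Tensor) -> torch.Tensor:
    """Stand-in for one dLLM forward pass: per-position logits over the vocab.
    A real model's predictions shift as more context is decoded; small
    token-dependent noise mimics that here."""
    gen = torch.Generator().manual_seed(int(tokens.sum()))
    return FIXED_LOGITS + 0.1 * torch.randn(SEQ_LEN, VOCAB, generator=gen)

def slowfast_sample(tau_conf=0.5, stable_steps=2, fast_window=8):
    tokens = torch.full((SEQ_LEN,), MASK_ID)
    prev_pred = torch.full((SEQ_LEN,), -1)
    stability = torch.zeros(SEQ_LEN, dtype=torch.long)

    while bool((tokens == MASK_ID).any()):
        probs = torch.softmax(toy_denoiser(tokens), dim=-1)
        conf, pred = probs.max(dim=-1)

        # Convergence principle: count consecutive steps a position's argmax is unchanged.
        stability = torch.where(pred == prev_pred, stability + 1, torch.zeros_like(stability))
        prev_pred = pred.clone()

        masked = tokens == MASK_ID
        # Certainty principle: a masked position is "ready" once confident and stable.
        ready = masked & (conf >= tau_conf) & (stability >= stable_steps)

        # Positional principle: restrict parallel commits to the earliest undecoded window.
        first = int(masked.nonzero()[0])
        window = torch.zeros_like(masked)
        window[first:first + fast_window] = True
        ready_here = ready & window

        if int(ready_here.sum()) >= fast_window // 2:
            commit = ready_here          # fast phase: decode the window in parallel
        else:
            best = torch.where(masked, conf, torch.full_like(conf, -1.0)).argmax()
            commit = torch.zeros_like(masked)
            commit[best] = True          # slow phase: decode one exploratory token

        tokens[commit] = pred[commit]
    return tokens

print(slowfast_sample())

Because the fast phase commits several tokens per forward pass while the slow phase pays one pass per token, the speedup in this sketch depends on how often the window qualifies; the paper's reported gains additionally come from combining the schedule with dLLM-Cache.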

@article{wei2025_2506.10848,
  title={Accelerating Diffusion Large Language Models with SlowFast Sampling: The Three Golden Principles},
  author={Qingyan Wei and Yaojie Zhang and Zhiyuan Liu and Dongrui Liu and Linfeng Zhang},
  journal={arXiv preprint arXiv:2506.10848},
  year={2025}
}