Diffusion Buffer: Online Diffusion-based Speech Enhancement with Sub-Second Latency

Diffusion models are a class of generative models that have recently been used for speech enhancement with remarkable success, but they are computationally expensive at inference time. This makes them impractical for processing streaming data in real time. In this work, we adapt a sliding window diffusion framework to the speech enhancement task. Our approach progressively corrupts speech signals through time, assigning more noise to frames close to the present in a buffer. The method outputs denoised frames with a delay proportional to the chosen buffer size, enabling a trade-off between performance and latency. Empirical results demonstrate that our method outperforms standard diffusion models and runs efficiently on a GPU, achieving an input-output latency on the order of 0.3 to 1 second. This marks the first practical diffusion-based solution for online speech enhancement.
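To make the sliding-window mechanism concrete, below is a minimal sketch of how such a diffusion buffer could be organized. It is an assumption-laden illustration, not the authors' implementation: the class name `DiffusionBuffer`, the `score_model` callable, the linear noise schedule, and the simplified Euler-style reverse update are all hypothetical.

```python
import collections
import torch

class DiffusionBuffer:
    """Hypothetical sketch of a sliding-window diffusion buffer.

    Each incoming noisy frame enters the buffer at the highest noise
    level; every push advances all buffered frames one reverse-diffusion
    step, so a frame leaving the buffer has been denoised B times.
    """

    def __init__(self, score_model, buffer_size: int):
        self.score_model = score_model   # learned denoiser s(x, t); assumed callable
        self.B = buffer_size             # buffer size sets the input-output latency
        self.buf = collections.deque()   # slot 0 = newest (noisiest) frame

    @torch.no_grad()
    def push(self, noisy_frame: torch.Tensor):
        """Insert one noisy frame; return a denoised frame once the buffer is full."""
        self.buf.appendleft(noisy_frame)
        # Slot i sits at noise level t_i = 1 - i/B (linear schedule, an assumption):
        # frames closer to the present carry more noise, as in the abstract.
        for i in range(min(len(self.buf), self.B)):
            t = 1.0 - i / self.B
            # Simplified Euler step of the reverse process; a real system would
            # use the SDE/ODE solver matching the trained model.
            self.buf[i] = self.buf[i] + self.score_model(self.buf[i], t) / self.B
        if len(self.buf) > self.B:
            return self.buf.pop()        # oldest frame: denoised B times, t ~ 0
        return None                      # warm-up: no output for the first B frames
```

Fed at the frame hop rate, the first output appears after `buffer_size` frames, which is the latency-versus-performance trade-off the abstract describes: a smaller buffer lowers delay but gives each frame fewer denoising steps.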
@article{lay2025_2506.02908,
  title={Diffusion Buffer: Online Diffusion-based Speech Enhancement with Sub-Second Latency},
  author={Bunlong Lay and Rostislav Makarov and Timo Gerkmann},
  journal={arXiv preprint arXiv:2506.02908},
  year={2025}
}