
Comparative Analysis of Fast and High-Fidelity Neural Vocoders for Low-Latency Streaming Synthesis in Resource-Constrained Environments

Main: 4 pages, 4 figures, 2 tables; Bibliography: 1 page
Abstract

In real-time speech synthesis, neural vocoders must produce audio with low latency, typically through causal processing and streaming inference. However, streaming introduces inefficiencies absent in batch synthesis, such as limited parallelism, inter-frame dependency management, and parameter loading overhead. This paper proposes multi-stream Wavehax (MS-Wavehax), an efficient neural vocoder for low-latency streaming, obtained by extending the aliasing-free neural vocoder Wavehax with multi-stream decomposition. We analyze the latency-throughput trade-off in a CPU-only environment and identify key bottlenecks in streaming neural vocoders. Our findings provide practical insights for optimizing chunk sizes and designing vocoders tailored to specific application demands and hardware constraints. Furthermore, our subjective evaluations show that MS-Wavehax delivers high speech quality under both causal and non-causal conditions while remaining remarkably compact and easily deployable in resource-constrained environments.

@article{yoneyama2025_2506.03554,
  title={Comparative Analysis of Fast and High-Fidelity Neural Vocoders for Low-Latency Streaming Synthesis in Resource-Constrained Environments},
  author={Reo Yoneyama and Masaya Kawamura and Ryo Terashima and Ryuichi Yamamoto and Tomoki Toda},
  journal={arXiv preprint arXiv:2506.03554},
  year={2025}
}