Vision-Integrated High-Quality Neural Speech Coding

This paper proposes a novel vision-integrated neural speech codec (VNSC), which aims to enhance speech coding quality by leveraging visual modality information. In VNSC, an image analysis-synthesis module extracts visual features from lip images, while a feature fusion module mediates between the image analysis-synthesis module and the speech coding module, passing visual information to assist speech coding. Depending on whether visual information is available at inference time, the feature fusion module injects visual features into the speech coding module through either an explicit integration strategy or an implicit distillation strategy. Experimental results confirm that integrating visual information effectively improves the quality of the decoded speech and enhances the noise robustness of the neural speech codec, without increasing the bitrate.
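The abstract does not spell out how the two fusion strategies operate, so the following PyTorch sketch illustrates one plausible reading: explicit integration adds projected lip features to the acoustic features when video is available, while implicit distillation trains an audio-only branch to mimic the fused features so that inference needs no video. All module names, dimensions, and the additive fusion are assumptions, not the authors' actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureFusion(nn.Module):
    """Hypothetical fusion of visual lip features into codec features.

    Explicit mode: visual features are projected and added to the
    acoustic features (video required at inference).
    Implicit mode: an audio-only "student" is distilled to match the
    fused features, so inference works without video.
    """

    def __init__(self, audio_dim: int = 256, visual_dim: int = 128):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, audio_dim)  # explicit path
        self.student = nn.Linear(audio_dim, audio_dim)       # implicit path

    def forward(self, audio_feat, visual_feat=None):
        if visual_feat is not None:
            # Explicit integration: inject projected visual features.
            return audio_feat + self.visual_proj(visual_feat)
        # Implicit distillation: audio-only student stands in for fusion.
        return self.student(audio_feat)

    def distill_loss(self, audio_feat, visual_feat):
        # Train the audio-only student to match the explicitly fused
        # "teacher" features; the teacher path is frozen here.
        with torch.no_grad():
            teacher = audio_feat + self.visual_proj(visual_feat)
        return F.mse_loss(self.student(audio_feat), teacher)


# Usage sketch: frame-level features for a batch of 4 utterances.
fusion = FeatureFusion()
audio = torch.randn(4, 100, 256)   # (batch, frames, audio_dim)
video = torch.randn(4, 100, 128)   # (batch, frames, visual_dim)
fused = fusion(audio, video)       # explicit integration (training / AV inference)
loss = fusion.distill_loss(audio, video)  # implicit distillation objective
audio_only = fusion(audio)         # audio-only inference after distillation
```

The distillation branch is what would keep the bitrate unchanged: only the acoustic stream is transmitted, and the visual information influences the codec solely through training.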
@article{guo2025_2505.23379,
  title={Vision-Integrated High-Quality Neural Speech Coding},
  author={Yao Guo and Yang Ai and Rui-Chen Zheng and Hui-Peng Du and Xiao-Hang Jiang and Zhen-Hua Ling},
  journal={arXiv preprint arXiv:2505.23379},
  year={2025}
}