
A Study on Speech Assessment with Visual Cues

Abstract

Non-intrusive assessment of speech quality and intelligibility is essential when clean reference signals are unavailable. In this work, we propose a multimodal framework that integrates audio features and visual cues to predict PESQ and STOI scores. The framework employs a dual-branch architecture in which spectral features are extracted with the short-time Fourier transform (STFT) and visual embeddings are obtained from a visual encoder. The two feature streams are fused and processed by a CNN-BLSTM with attention, followed by multi-task learning to predict PESQ and STOI simultaneously. Evaluations on the LRS3-TED dataset, augmented with noise from the DEMAND corpus, show that our model outperforms the audio-only baseline. Under seen noise conditions, it improves LCC by 9.61% (0.8397 → 0.9205) for PESQ and 11.47% (0.7403 → 0.8253) for STOI. These results highlight the effectiveness of incorporating visual cues in improving the accuracy of non-intrusive speech assessment.
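
To make the described pipeline concrete, below is a minimal PyTorch sketch of such a dual-branch predictor: a CNN branch over STFT magnitude spectrograms, a projection over precomputed visual embeddings, fusion into a BLSTM with attention pooling, and two regression heads for PESQ and STOI. All layer sizes, module names, and the choice of visual front-end are illustrative assumptions, not the authors' exact configuration.

# Sketch of a dual-branch multimodal quality/intelligibility predictor.
# Dimensions and architecture details are assumptions for illustration only.
import torch
import torch.nn as nn

class MultimodalQualityPredictor(nn.Module):
    def __init__(self, n_freq_bins=257, visual_dim=512, hidden=256):
        super().__init__()
        # Audio branch: 2-D CNN over the STFT magnitude spectrogram.
        self.audio_cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.audio_proj = nn.Linear(32 * n_freq_bins, hidden)
        # Visual branch: projection of per-frame embeddings from a
        # pretrained visual encoder (assumed to be computed upstream).
        self.visual_proj = nn.Linear(visual_dim, hidden)
        # Fusion + temporal modeling: BLSTM, then additive attention pooling.
        self.blstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        # Multi-task heads: one regressor per metric.
        self.pesq_head = nn.Linear(2 * hidden, 1)
        self.stoi_head = nn.Linear(2 * hidden, 1)

    def forward(self, spec, visual):
        # spec:   (B, T, F) STFT magnitude frames
        # visual: (B, T, visual_dim) time-aligned visual embeddings
        B, T, F = spec.shape
        a = self.audio_cnn(spec.unsqueeze(1))          # (B, 32, T, F)
        a = a.permute(0, 2, 1, 3).reshape(B, T, -1)    # (B, T, 32*F)
        a = self.audio_proj(a)                         # (B, T, hidden)
        v = self.visual_proj(visual)                   # (B, T, hidden)
        x, _ = self.blstm(torch.cat([a, v], dim=-1))   # (B, T, 2*hidden)
        w = torch.softmax(self.attn(x), dim=1)         # attention weights over time
        pooled = (w * x).sum(dim=1)                    # (B, 2*hidden)
        return self.pesq_head(pooled), self.stoi_head(pooled)

Under this sketch, multi-task training would typically minimize a combined objective, e.g. the sum of the mean-squared errors of the PESQ and STOI predictions against their intrusive ground-truth scores.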

@article{ahmed2025_2506.09549,
  title={A Study on Speech Assessment with Visual Cues},
  author={Shafique Ahmed and Ryandhimas E. Zezario and Nasir Saleem and Amir Hussain and Hsin-Min Wang and Yu Tsao},
  journal={arXiv preprint arXiv:2506.09549},
  year={2025}
}
3 pages (main text) + 2 pages (bibliography), 5 figures, 2 tables