
APG-MOS: Auditory Perception Guided-MOS Predictor for Synthetic Speech

Zhicheng Lian
Lizhi Wang
Hua Huang
Abstract

Automatic speech quality assessment aims to quantify subjective human perception of speech through computational models, reducing the need for labor-intensive manual evaluations. While deep learning-based models have made progress in predicting mean opinion scores (MOS) for synthetic speech, their neglect of fundamental auditory perception mechanisms limits their consistency with human judgments. To address this issue, we propose an auditory perception guided-MOS prediction model (APG-MOS) that synergistically integrates auditory modeling with semantic analysis to enhance consistency with human judgments. Specifically, we first design a perceptual module, grounded in biological auditory mechanisms, that simulates cochlear functions and encodes acoustic signals into biologically aligned electrochemical representations. Second, we propose a residual vector quantization (RVQ)-based method for modeling semantic distortion, quantifying the degradation of speech quality at the semantic level. Finally, we design a residual cross-attention architecture, coupled with a progressive learning strategy, to enable multimodal fusion of the encoded electrochemical signals and semantic representations. Experiments demonstrate that APG-MOS achieves superior performance on two primary benchmarks. Our code and checkpoints will be released in a public repository upon publication.
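
The abstract does not give implementation details, but the fusion step it describes can be illustrated with a minimal PyTorch sketch: auditory (cochlear-model) features attend to semantic (RVQ-derived) features through cross-attention with a residual connection, and the fused sequence is pooled and regressed to a single MOS value. All module names, dimensions, and the pooling head below are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ResidualCrossAttentionFusion(nn.Module):
    """Hypothetical residual cross-attention block fusing an auditory stream
    (queries) with a semantic stream (keys/values)."""
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, auditory, semantic):
        # Auditory features attend to semantic features; the residual
        # connection preserves the original auditory stream.
        attended, _ = self.attn(query=auditory, key=semantic, value=semantic)
        x = self.norm1(auditory + attended)
        return self.norm2(x + self.ffn(x))

class MOSHead(nn.Module):
    """Mean-pool the fused sequence over time and regress one MOS score."""
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Linear(dim, 1)

    def forward(self, fused):
        return self.proj(fused.mean(dim=1)).squeeze(-1)

# Toy usage: batch of 8 utterances, 200 frames, 256-dim features per stream.
auditory = torch.randn(8, 200, 256)   # e.g., cochlear-model encodings
semantic = torch.randn(8, 200, 256)   # e.g., RVQ-based semantic features
fusion, head = ResidualCrossAttentionFusion(), MOSHead()
mos_pred = head(fusion(auditory, semantic))
print(mos_pred.shape)  # torch.Size([8])
```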

@article{lian2025_2504.20447,
  title={APG-MOS: Auditory Perception Guided-MOS Predictor for Synthetic Speech},
  author={Zhicheng Lian and Lizhi Wang and Hua Huang},
  journal={arXiv preprint arXiv:2504.20447},
  year={2025}
}