User-Driven Voice Generation and Editing through Latent Space Navigation

4 pages (main) + 2 pages (bibliography), 6 figures, 2 tables
Abstract

This paper presents a user-driven approach to synthesizing highly specific target voices from user feedback, which is particularly beneficial for speech-impaired individuals who wish to recreate their lost voices but have no prior recordings. Specifically, we leverage the neural analysis and synthesis framework to construct a low-dimensional yet sufficiently expressive latent speaker embedding space. Within this latent space, we implement a search algorithm that guides users toward their desired voice by completing a sequence of straightforward comparison tasks. Both synthetic simulations and real-world user studies demonstrate that the proposed approach can effectively approximate target voices. Moreover, by analyzing the Jacobians of the mel-spectrogram generator, we identify a set of meaningful voice-editing directions within the latent space. These directions enable users to further fine-tune specific attributes of the generated voice, including pitch level, pitch range, volume, vocal tension, nasality, and tone color. Audio samples are available at https://myspeechprojects.github.io/voicedesign/.
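
The search procedure is only summarized above; as a rough illustration, a human-in-the-loop coordinate descent over the latent speaker embedding could look like the sketch below. The synthesize and ask_user_to_compare callables, the latent dimensionality, the candidate step sizes, and the zero-vector starting point are all assumptions made for this example, not the authors' actual implementation.

import numpy as np

def human_in_the_loop_coordinate_descent(
    synthesize,            # assumed callable: latent vector -> audio sample
    ask_user_to_compare,   # assumed callable: (audio_a, audio_b) -> True if the user prefers audio_a
    dim=16,                # assumed latent dimensionality
    steps=(-0.5, -0.25, 0.25, 0.5),  # assumed candidate offsets per coordinate
    n_rounds=3,
):
    """Approximate a target voice by adjusting one latent coordinate at a time,
    keeping a change only when the user prefers the resulting sample."""
    z = np.zeros(dim)              # start from the mean speaker embedding (assumption)
    current_audio = synthesize(z)
    for _ in range(n_rounds):
        for i in range(dim):
            for step in steps:
                candidate = z.copy()
                candidate[i] += step
                candidate_audio = synthesize(candidate)
                # One straightforward pairwise comparison task per candidate.
                if ask_user_to_compare(candidate_audio, current_audio):
                    z, current_audio = candidate, candidate_audio
    return z

Each round sweeps every coordinate once, so the number of comparisons a user answers is at most n_rounds * dim * len(steps).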

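The abstract attributes the editing directions to an analysis of the mel-spectrogram generator's Jacobians. One common way to realize such an analysis, shown here as an assumption about the method rather than a statement of it, is to take the leading right singular vectors of the Jacobian of the generated mel-spectrogram with respect to the speaker embedding:

import torch

def principal_edit_directions(generator, z, n_directions=6):
    """Candidate latent editing directions at z: the right singular vectors of
    the generator's Jacobian, ordered by how strongly they change the output.

    generator is assumed to map a speaker embedding of shape (1, dim) to a
    mel-spectrogram; the signature is illustrative only."""
    z = z.detach().requires_grad_(True)

    def flat_mel(latent):
        return generator(latent.unsqueeze(0)).reshape(-1)

    # Jacobian of the flattened mel-spectrogram w.r.t. the latent vector: (out_dim, dim).
    jac = torch.autograd.functional.jacobian(flat_mel, z)
    # Right singular vectors give directions sorted by output sensitivity.
    _, _, vh = torch.linalg.svd(jac, full_matrices=False)
    return vh[:n_directions]   # each row is one candidate editing direction

Which of these directions correspond to interpretable attributes such as pitch range or nasality would still have to be established by listening to edited samples.
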
@article{tian2025_2408.17068,
  title={Personalized Voice Synthesis through Human-in-the-Loop Coordinate Descent},
  author={Yusheng Tian and Junbin Liu and Tan Lee},
  journal={arXiv preprint arXiv:2408.17068},
  year={2025}
}