LipDiffuser: Lip-to-Speech Generation with Conditional Diffusion Models

Abstract

We present LipDiffuser, a conditional diffusion model for lip-to-speech generation that synthesizes natural and intelligible speech directly from silent video recordings. Our approach leverages the magnitude-preserving ablated diffusion model (MP-ADM) architecture as the denoiser. To condition the model effectively, we incorporate visual features via magnitude-preserving feature-wise linear modulation (MP-FiLM), alongside speaker embeddings. A neural vocoder then reconstructs the speech waveform from the generated mel-spectrograms. Evaluations on LRS3 and TCD-TIMIT demonstrate that LipDiffuser outperforms existing lip-to-speech baselines in perceptual speech quality and speaker similarity, while remaining competitive in downstream automatic speech recognition (ASR). These findings are also supported by a formal listening experiment. Extensive ablation studies and cross-dataset evaluation confirm the effectiveness and generalization capabilities of our approach.
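
As a rough illustration of the conditioning mechanism described above, the following PyTorch sketch shows one way feature-wise linear modulation (FiLM) can be made magnitude-preserving, in the spirit of the EDM2-style layers that MP-ADM builds on. The class MPFiLM, the helper mp_sum, the (1 + gain * gamma) parameterization, and all tensor shapes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def mp_sum(a: torch.Tensor, b: torch.Tensor, t: float = 0.3) -> torch.Tensor:
    # Magnitude-preserving sum (EDM2-style, assumed here): a linear blend
    # rescaled so two independent unit-variance inputs yield unit variance out.
    return torch.lerp(a, b, t) / ((1 - t) ** 2 + t ** 2) ** 0.5

class MPFiLM(nn.Module):
    """Sketch of magnitude-preserving feature-wise linear modulation.

    A conditioning vector (e.g., visual features concatenated with a speaker
    embedding) predicts a per-channel scale and shift. Weight rows are kept at
    unit norm and the learned gain starts at zero, so the layer is an identity
    at initialization and activation magnitudes stay controlled. Hypothetical
    layout, not the authors' exact implementation.
    """

    def __init__(self, channels: int, cond_dim: int):
        super().__init__()
        self.w_gamma = nn.Parameter(torch.randn(channels, cond_dim))
        self.w_beta = nn.Parameter(torch.randn(channels, cond_dim))
        self.gain = nn.Parameter(torch.zeros(()))  # identity modulation at init

    @staticmethod
    def _mp_linear(w: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        # Forced weight normalization: unit-norm rows map unit-variance
        # (approximately i.i.d.) inputs to unit-variance outputs.
        return F.linear(c, F.normalize(w, dim=1))

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T) denoiser feature map; cond: (B, cond_dim)
        gamma = self._mp_linear(self.w_gamma, cond).unsqueeze(-1)  # (B, C, 1)
        beta = self._mp_linear(self.w_beta, cond).unsqueeze(-1)    # (B, C, 1)
        scaled = x * (1.0 + self.gain * gamma)  # FiLM scale around identity
        return mp_sum(scaled, beta.expand_as(scaled))  # FiLM shift, mp blend

At initialization this layer reduces to (a rescaled) identity on x, which reflects the general magnitude-preserving design goal of keeping activation statistics stable throughout training.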

@article{oliveira2025_2505.11391,
  title={LipDiffuser: Lip-to-Speech Generation with Conditional Diffusion Models},
  author={Danilo de Oliveira and Julius Richter and Tal Peer and Timo Gerkmann},
  journal={arXiv preprint arXiv:2505.11391},
  year={2025}
}