
Learning Speaker-Invariant Visual Features for Lipreading

Comments: 7 pages (main) + 2 pages (bibliography), 8 figures, 3 tables
Abstract

Lipreading is a challenging cross-modal task that aims to convert visual lip movements into spoken text. Existing lipreading methods often extract visual features that include speaker-specific lip attributes (e.g., shape, color, texture), which introduce spurious correlations between vision and text. These correlations lead to suboptimal lipreading accuracy and restrict model generalization. To address this challenge, we introduce SIFLip, a speaker-invariant visual feature learning framework that disentangles speaker-specific attributes with two complementary modules, Implicit Disentanglement and Explicit Disentanglement, to improve generalization. Specifically, since different speakers exhibit consistent semantics between lip movements and phonetic text when pronouncing the same words, our implicit disentanglement module leverages stable text embeddings as supervisory signals to learn common visual representations across speakers, implicitly decoupling speaker-specific features. Additionally, we design a speaker recognition sub-task within the main lipreading pipeline to filter speaker-specific features, and then explicitly disentangle these personalized visual features from the backbone network via gradient reversal. Experimental results demonstrate that SIFLip significantly improves generalization performance across multiple public datasets, outperforming state-of-the-art methods.
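To make the two disentanglement ideas concrete, below is a minimal PyTorch sketch of (a) a gradient reversal layer feeding a speaker-recognition head, the standard mechanism behind the explicit disentanglement step, and (b) a cosine loss that pulls visual features toward fixed text embeddings, one plausible form of the text-supervised implicit disentanglement. All names (GradReverse, SpeakerHead, text_alignment_loss), the lambda scaling, and the choice of cosine alignment are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient trains the backbone to discard speaker cues
        # while the attached head still tries to recognize the speaker.
        return -ctx.lambd * grad_output, None


class SpeakerHead(nn.Module):
    """Hypothetical speaker-recognition sub-task behind a gradient reversal layer."""

    def __init__(self, feat_dim: int, num_speakers: int, lambd: float = 1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Linear(feat_dim, num_speakers)

    def forward(self, visual_features: torch.Tensor) -> torch.Tensor:
        reversed_feats = GradReverse.apply(visual_features, self.lambd)
        return self.classifier(reversed_feats)


def text_alignment_loss(visual_feats: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
    """Pulls visual features toward fixed text embeddings of the spoken words
    (a simple stand-in for text-supervised implicit disentanglement)."""
    v = F.normalize(visual_feats, dim=-1)
    t = F.normalize(text_embeds, dim=-1)
    return (1.0 - (v * t).sum(dim=-1)).mean()
```

In training, the speaker-classification loss from SpeakerHead would be added to the lipreading and alignment losses; because of the reversed gradient, minimizing it encourages the shared backbone to produce features from which the speaker cannot be identified.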

@article{li2025_2506.07572,
  title={Learning Speaker-Invariant Visual Features for Lipreading},
  author={Yu Li and Feng Xue and Shujie Li and Jinrui Zhang and Shuang Yang and Dan Guo and Richang Hong},
  journal={arXiv preprint arXiv:2506.07572},
  year={2025}
}