Plug-and-Play Co-Occurring Face Attention for Robust Audio-Visual Speaker Extraction

Main: 4 pages, 2 figures; Bibliography: 1 page; 10 tables
Abstract

Audio-visual speaker extraction isolates a target speaker's speech from a speech mixture conditioned on a visual cue, typically a recording of the target speaker's face. However, in real-world scenarios, other co-occurring faces are often present on-screen, and they provide valuable speaker activity cues about the scene. In this work, we introduce a plug-and-play inter-speaker attention module that processes a flexible number of co-occurring faces, allowing for more accurate speaker extraction in complex multi-person environments. We integrate our module into two prominent models: AV-DPRNN and the state-of-the-art AV-TFGridNet. Extensive experiments on diverse datasets, including the highly overlapped VoxCeleb2 and the sparsely overlapped MISP, demonstrate that our approach consistently outperforms the baselines. Furthermore, cross-dataset evaluations on LRS2 and LRS3 confirm the robustness and generalizability of our method.
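
The abstract does not specify the internal design of the inter-speaker attention module; the sketch below is only an illustrative assumption of how such a plug-and-play component could be realized, letting the target speaker's visual embedding attend to a variable number of co-occurring face embeddings. The class name, dimensions, and use of nn.MultiheadAttention are hypothetical and not taken from the paper.

import torch
import torch.nn as nn


class CoOccurringFaceAttention(nn.Module):
    """Hypothetical inter-speaker attention: the target speaker's frame-wise
    visual embedding attends to embeddings of co-occurring faces."""

    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target_emb, other_embs, other_mask=None):
        # target_emb: (B, T, D) embeddings of the target speaker's face
        # other_embs: (B, N*T, D) embeddings of N co-occurring faces, flattened
        # other_mask: (B, N*T) True where a co-occurring frame is padding
        attended, _ = self.attn(
            query=target_emb,
            key=other_embs,
            value=other_embs,
            key_padding_mask=other_mask,
        )
        # Residual fusion keeps the module plug-and-play: with no useful
        # co-occurring context the output stays close to the target embedding.
        return self.norm(target_emb + attended)


if __name__ == "__main__":
    B, T, N, D = 2, 50, 3, 256
    module = CoOccurringFaceAttention(dim=D)
    target = torch.randn(B, T, D)
    others = torch.randn(B, N * T, D)
    print(module(target, others).shape)  # torch.Size([2, 50, 256])

In this sketch the enriched embedding would replace the original visual cue fed to an extraction backbone such as AV-DPRNN or AV-TFGridNet; how the paper actually fuses the co-occurring face information may differ.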

@article{pan2025_2505.20635,
  title={Plug-and-Play Co-Occurring Face Attention for Robust Audio-Visual Speaker Extraction},
  author={Zexu Pan and Shengkui Zhao and Tingting Wang and Kun Zhou and Yukun Ma and Chong Zhang and Bin Ma},
  journal={arXiv preprint arXiv:2505.20635},
  year={2025}
}