SAM2-LOVE: Segment Anything Model 2 in Language-aided Audio-Visual Scenes

Main text: 8 pages, 5 figures, 6 tables; bibliography: 2 pages
Abstract

Reference Audio-Visual Segmentation (Ref-AVS) aims to provide pixel-wise scene understanding in Language-aided Audio-Visual Scenes (LAVS). This task requires the model to continuously segment the objects referred to by text and audio throughout a video. Previous dual-modality methods consistently fail due to the lack of a third modality, and the existing triple-modality method struggles with spatio-temporal consistency, leading to target shift across frames. In this work, we introduce a novel framework, termed SAM2-LOVE, which integrates textual, audio, and visual representations into a learnable token to prompt and align SAM2 for Ref-AVS in LAVS. Technically, our approach includes a multimodal fusion module aimed at improving the multimodal understanding of SAM2, as well as token propagation and accumulation strategies designed to enhance spatio-temporal consistency without forgetting historical information. We conduct extensive experiments to demonstrate that SAM2-LOVE outperforms the SOTA by 8.5% in $\mathcal{J}\&\mathcal{F}$ on the Ref-AVS benchmark and showcase the simplicity and effectiveness of its components. Our code will be made available.
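As a rough illustration (not the authors' implementation), the sketch below shows one way the abstract's idea could be realized: text, audio, and visual features are fused into a single learnable token via cross-attention, the token is propagated from frame to frame, and a simple running average stands in for the accumulation of historical tokens. All module names, feature dimensions, the MultimodalFusionToken/segment_video helpers, and the decode_masks placeholder are assumptions for illustration only; the paper's actual fusion module and propagation/accumulation strategies may differ.

    # Minimal sketch (assumptions throughout): fuse text, audio, and visual
    # features into one learnable token that prompts a SAM2-style mask decoder.
    import torch
    import torch.nn as nn

    class MultimodalFusionToken(nn.Module):
        def __init__(self, d_model=256, n_heads=8):
            super().__init__()
            # Learnable token that absorbs text, audio, and visual context.
            self.token = nn.Parameter(torch.randn(1, 1, d_model))
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm = nn.LayerNorm(d_model)

        def forward(self, text_feat, audio_feat, visual_feat, prev_token=None):
            # text_feat: (B, Lt, D), audio_feat: (B, La, D), visual_feat: (B, Lv, D)
            B = text_feat.size(0)
            query = self.token.expand(B, -1, -1)
            if prev_token is not None:
                # Token propagation: carry the previous frame's token forward
                # so the target identity stays consistent across time.
                query = query + prev_token
            context = torch.cat([text_feat, audio_feat, visual_feat], dim=1)
            fused, _ = self.attn(query, context, context)
            return self.norm(query + fused)  # (B, 1, D) prompt token

    def segment_video(frame_feats, text_feat, audio_feats, fusion, decode_masks):
        # Per-frame loop; a running mean of past tokens stands in for the
        # paper's accumulation strategy (assumption, not the actual rule).
        prev_token, history, masks = None, [], []
        for visual_feat, audio_feat in zip(frame_feats, audio_feats):
            token = fusion(text_feat, audio_feat, visual_feat, prev_token)
            history.append(token)
            prev_token = torch.stack(history).mean(dim=0)  # accumulate history
            masks.append(decode_masks(visual_feat, token))  # e.g., a SAM2-style decoder
        return masks

The design intent of such a sketch is that the single fused token plays the role of a prompt embedding: the mask decoder only needs one query per frame, while temporal consistency comes from seeding each frame's query with the accumulated token rather than re-grounding the referred object from scratch.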

@article{wang2025_2506.01558,
  title={SAM2-LOVE: Segment Anything Model 2 in Language-aided Audio-Visual Scenes},
  author={Yuji Wang and Haoran Xu and Yong Liu and Jiaze Li and Yansong Tang},
  journal={arXiv preprint arXiv:2506.01558},
  year={2025}
}