
AuralSAM2: Enabling SAM2 Hear Through Pyramid Audio-Visual Feature Prompting

Main: 8 Pages
14 Figures
Bibliography: 3 Pages
7 Tables
Appendix: 7 Pages
Abstract

Segment Anything Model 2 (SAM2) exhibits strong generalisation for promptable segmentation in video clips; however, its integration with the audio modality remains underexplored. Existing approaches mainly follow two directions: (1) injecting adapters into the image encoder to receive audio signals, which incurs efficiency costs during prompt engineering, and (2) leveraging additional foundation models to generate visual prompts for the sounding objects, which are often imprecisely localised and therefore misguide SAM2. Moreover, these methods overlook the rich semantic interplay between hierarchical visual features and the other modalities, resulting in suboptimal cross-modal fusion. In this work, we propose AuralSAM2, comprising the novel AuralFuser module, which is attached externally to SAM2 to integrate features from different modalities and generate feature-level prompts that guide SAM2's decoder in segmenting sounding targets. This integration is facilitated by a feature pyramid, further refining semantic understanding and enhancing object awareness in multimodal scenarios. Additionally, audio-guided contrastive learning is introduced to explicitly align audio and visual representations and to mitigate biases caused by dominant visual patterns. Results on public benchmarks show that our approach achieves remarkable improvements over previous methods in the field. Code is available at this https URL.
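The abstract describes an external fuser that attends audio features over a visual feature pyramid and emits feature-level prompt tokens for SAM2's decoder. The sketch below illustrates that idea only at a high level, under assumptions not stated in the abstract: the class name AuralFuserSketch, the token counts, and the per-level cross-attention scheme are all illustrative and not the authors' implementation.

```python
# Hypothetical sketch of pyramid audio-visual feature prompting: learnable prompt
# tokens plus an audio token cross-attend to each level of a visual feature pyramid,
# and the refined tokens would serve as feature-level prompts for a frozen mask decoder.
import torch
import torch.nn as nn


class AuralFuserSketch(nn.Module):
    def __init__(self, audio_dim=128, vis_dims=(256, 256, 256), embed_dim=256,
                 num_prompt_tokens=4, num_heads=8):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, embed_dim)
        # One 1x1 projection per pyramid level so all levels share a common width.
        self.vis_projs = nn.ModuleList(nn.Conv2d(d, embed_dim, 1) for d in vis_dims)
        # Learnable prompt tokens that are refined by cross-attention.
        self.prompt_tokens = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim))
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, audio_feat, pyramid_feats):
        # audio_feat: (B, audio_dim); pyramid_feats: list of (B, C_l, H_l, W_l)
        B = audio_feat.shape[0]
        audio_tok = self.audio_proj(audio_feat).unsqueeze(1)          # (B, 1, D)
        tokens = self.prompt_tokens.unsqueeze(0).expand(B, -1, -1)    # (B, T, D)
        queries = torch.cat([tokens, audio_tok], dim=1)               # (B, T+1, D)
        for proj, feat in zip(self.vis_projs, pyramid_feats):
            kv = proj(feat).flatten(2).transpose(1, 2)                # (B, H*W, D)
            attn_out, _ = self.cross_attn(queries, kv, kv)
            queries = self.norm(queries + attn_out)                   # refine per level
        # The returned tokens stand in for the feature-level prompts fed to the decoder.
        return queries


if __name__ == "__main__":
    fuser = AuralFuserSketch()
    audio = torch.randn(2, 128)
    pyramid = [torch.randn(2, 256, s, s) for s in (64, 32, 16)]
    print(fuser(audio, pyramid).shape)  # torch.Size([2, 5, 256])
```

The audio-guided contrastive objective mentioned in the abstract is not sketched here; aligning the audio token with visual embeddings of the sounding region via a standard contrastive loss would be one plausible reading.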

@article{liu2025_2506.01015,
  title={AuralSAM2: Enabling SAM2 Hear Through Pyramid Audio-Visual Feature Prompting},
  author={Yuyuan Liu and Yuanhong Chen and Chong Wang and Junlin Han and Junde Wu and Can Peng and Jingkun Chen and Yu Tian and Gustavo Carneiro},
  journal={arXiv preprint arXiv:2506.01015},
  year={2025}
}