MamFusion: Multi-Mamba with Temporal Fusion for Partially Relevant Video Retrieval

Partially Relevant Video Retrieval (PRVR) is a challenging task in the domain of multimedia retrieval: given a text query, the goal is to identify and retrieve untrimmed videos that are only partially relevant to that query. In this work, we investigate long-sequence video content understanding to address information redundancy. Leveraging the strong long-term state-space modeling capability and linear scalability of the Mamba module, we introduce a multi-Mamba module with a temporal fusion framework (MamFusion) tailored to the PRVR task. The framework captures state-relatedness in long-term video content and integrates it into text-video relevance understanding, thereby enhancing retrieval. Specifically, we introduce Temporal T-to-V Fusion and Temporal V-to-T Fusion to explicitly model temporal relationships between text queries and video moments, improving contextual awareness and retrieval accuracy. Extensive experiments on large-scale datasets demonstrate that MamFusion achieves state-of-the-art retrieval effectiveness. Code is available at: this https URL.
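The abstract does not specify the architecture's internals, but the two ingredients it names can be sketched in miniature: a linear state-space recurrence over frame features (the role Mamba blocks play in long-term temporal modeling) and a query-conditioned attention pooling over frames (one plausible reading of "Temporal T-to-V Fusion"). The function names, shapes, and the simplified non-selective recurrence below are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Toy linear state-space scan over a frame sequence:
    h_t = A h_{t-1} + B x_t,  y_t = C h_t.
    A stand-in for a Mamba block's long-range temporal modeling
    (real Mamba uses input-dependent, selective parameters)."""
    T = x.shape[0]
    h = np.zeros(A.shape[0])
    ys = []
    for t in range(T):
        h = A @ h + B @ x[t]
        ys.append(C @ h)
    return np.stack(ys)  # (T, d_out) temporally contextualized frame features

def t2v_fusion(q, v):
    """Hypothetical Temporal T-to-V Fusion: softmax-attend the text
    query vector q over per-frame features v (T, d), returning a
    query-conditioned video summary for relevance scoring."""
    scores = v @ q / np.sqrt(q.shape[0])  # (T,) similarity per frame
    w = np.exp(scores - scores.max())
    w /= w.sum()                          # attention weights over time
    return w @ v                          # (d,) fused video representation

# Example: 6 frames of 4-d features, identity-style SSM parameters.
rng = np.random.default_rng(0)
frames = rng.normal(size=(6, 4))
A, B, C = 0.9 * np.eye(4), np.eye(4), np.eye(4)
video_feats = ssm_scan(frames, A, B, C)
query = rng.normal(size=4)
fused = t2v_fusion(query, video_feats)
```

A symmetric V-to-T direction would attend video features over per-token query features; stacking several such scan-plus-fusion blocks would give the "multi-Mamba" flavor described in the abstract.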
@article{ying2025_2506.03473,
  title={MamFusion: Multi-Mamba with Temporal Fusion for Partially Relevant Video Retrieval},
  author={Xinru Ying and Jiaqi Mo and Jingyang Lin and Canghong Jin and Fangfang Wang and Lina Wei},
  journal={arXiv preprint arXiv:2506.03473},
  year={2025}
}