
Uneven Event Modeling for Partially Relevant Video Retrieval

Main: 5 pages, 4 figures; bibliography: 1 page
Abstract

Given a text query, partially relevant video retrieval (PRVR) aims to retrieve untrimmed videos containing relevant moments. Event modeling is crucial here, as it partitions the video into smaller temporal events that partially correspond to the text. Previous methods typically segment videos into a fixed number of equal-length clips, resulting in ambiguous event boundaries; they also rely on mean pooling to compute event representations, inevitably introducing undesired misalignment. To address these issues, we propose an Uneven Event Modeling (UEM) framework for PRVR. We first introduce the Progressive-Grouped Video Segmentation (PGVS) module, which iteratively formulates events in light of both temporal dependencies and semantic similarity between consecutive frames, yielding clear event boundaries. We further propose the Context-Aware Event Refinement (CAER) module, which refines event representations conditioned on cross-attention with the text. This enables event representations to focus on the frames most relevant to a given query, facilitating more precise text-video alignment. Extensive experiments demonstrate that our method achieves state-of-the-art performance on two PRVR benchmarks.
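The two modules described above can be illustrated with a minimal sketch. Assuming frame features are unit-length-comparable vectors, PGVS-style grouping can start a new event whenever a frame's similarity to the running event centroid drops below a threshold, and CAER-style refinement can reweight an event's frames by their affinity to the text embedding. All function names, the threshold `tau`, and the specific similarity rules here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def progressive_group(frames, tau=0.8):
    """Group consecutive frame features into uneven events.
    A new event starts when the cosine similarity between the
    current frame and the running event centroid falls below tau.
    (Hypothetical sketch; the paper's PGVS criterion may differ.)"""
    events = [[frames[0]]]
    for f in frames[1:]:
        centroid = np.mean(events[-1], axis=0)
        cos = f @ centroid / (np.linalg.norm(f) * np.linalg.norm(centroid) + 1e-8)
        if cos >= tau:
            events[-1].append(f)   # same event: semantically similar
        else:
            events.append([f])     # boundary: start a new event
    return [np.stack(e) for e in events]

def text_conditioned_event(event, text_vec):
    """Refine one event representation via text cross-attention:
    softmax over frame-text similarities weights the frames,
    instead of uniform mean pooling."""
    scores = event @ text_vec
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ event
```

Because grouping is driven by similarity rather than a fixed clip count, event lengths naturally vary with content, and the attention weights let a single event emphasize the frames a particular query actually describes.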

@article{zhu2025_2506.00891,
  title={Uneven Event Modeling for Partially Relevant Video Retrieval},
  author={Sa Zhu and Huashan Chen and Wanqian Zhang and Jinchao Zhang and Zexian Yang and Xiaoshuai Hao and Bo Li},
  journal={arXiv preprint arXiv:2506.00891},
  year={2025}
}