Find First, Track Next: Decoupling Identification and Propagation in Referring Video Object Segmentation

Abstract

Referring video object segmentation aims to segment and track a target object in a video using a natural language prompt. Existing methods typically fuse visual and textual features in a highly entangled manner, processing multi-modal information together to generate per-frame masks. However, this approach often struggles with ambiguous target identification, particularly in scenes with multiple similar objects, and fails to ensure consistent mask propagation across frames. To address these limitations, we introduce FindTrack, a novel decoupled framework that separates target identification from mask propagation. FindTrack first adaptively selects a key frame by balancing segmentation confidence and vision-text alignment, establishing a robust reference for the target object. This reference is then utilized by a dedicated propagation module to track and segment the object across the entire video. By decoupling these processes, FindTrack effectively reduces ambiguities in target association and enhances segmentation consistency. We demonstrate that FindTrack outperforms existing methods on public benchmarks.
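The abstract's key-frame selection step can be illustrated with a minimal sketch. The paper's actual scoring function is not specified here, so the weighted combination below (and the `alpha` parameter) is an assumption for illustration only:

```python
import numpy as np

def select_key_frame(seg_confidence, text_alignment, alpha=0.5):
    """Pick the frame that best balances two per-frame cues.

    Hypothetical scoring: a convex combination of per-frame mask
    confidence and vision-text alignment. FindTrack's exact
    balancing rule is not given in the abstract; alpha is an
    illustrative weight, not the paper's.
    """
    scores = (alpha * np.asarray(seg_confidence, dtype=float)
              + (1.0 - alpha) * np.asarray(text_alignment, dtype=float))
    # The selected index serves as the reference frame that the
    # propagation module would then track through the video.
    return int(np.argmax(scores))

# Toy example with four frames: frame 2 scores well on both cues.
conf = [0.6, 0.9, 0.8, 0.4]
align = [0.5, 0.4, 0.9, 0.7]
key = select_key_frame(conf, align)
print(key)  # → 2
```

In this sketch, decoupling means the language cues influence only the key-frame choice; the subsequent tracking step would consume the selected frame's mask alone.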

@article{cho2025_2503.03492,
  title={Find First, Track Next: Decoupling Identification and Propagation in Referring Video Object Segmentation},
  author={Suhwan Cho and Seunghoon Lee and Minhyeok Lee and Jungho Lee and Sangyoun Lee},
  journal={arXiv preprint arXiv:2503.03492},
  year={2025}
}