Occluded Video Instance Segmentation
Can our video understanding systems perceive objects under heavy occlusion? To answer this question, we collect a large-scale dataset called OVIS for occluded video instance segmentation, that is, simultaneously detecting, segmenting, and tracking instances in occluded scenes. OVIS consists of 296k high-quality instance masks spanning 25 semantic categories in which objects commonly occlude one another. While human vision can understand occluded instances through contextual reasoning and association, our experiments suggest that current video understanding systems fall short. On the OVIS dataset, the highest AP achieved by state-of-the-art algorithms is only 14.4, which reveals that we are still at a nascent stage of understanding objects, instances, and videos in real-world scenarios. We also propose a simple plug-and-play module that performs temporal feature calibration to complement object cues missing due to occlusion. Built upon MaskTrack R-CNN and SipMask, it obtains APs of 15.1 and 14.5 on the OVIS dataset and 32.1 and 35.1 on the YouTube-VIS dataset, respectively, a remarkable improvement over state-of-the-art methods. The OVIS dataset is released at http://songbai.site/ovis , and the project code will be available soon.
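The abstract does not detail how the temporal feature calibration module works, so the following is only an illustrative sketch of one plausible form of the idea: aligning reference-frame features to the current frame by local correlation matching, then blending the aligned features back in to compensate for cues lost to occlusion. The function names, the search-window design, and the fusion weight `alpha` are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def calibrate_reference(curr, ref, radius=1):
    """Align reference-frame features to the current frame (illustrative).

    For each spatial location, search a (2*radius+1)^2 neighbourhood in the
    reference feature map and keep the feature vector that correlates best
    (by dot product) with the current frame's vector at that location.
    curr, ref: float arrays of shape (C, H, W). Returns shape (C, H, W).
    """
    C, H, W = curr.shape
    out = np.empty_like(ref)
    for y in range(H):
        for x in range(W):
            best, best_score = ref[:, y, x], -np.inf
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        score = float(curr[:, y, x] @ ref[:, ny, nx])
                        if score > best_score:
                            best, best_score = ref[:, ny, nx], score
            out[:, y, x] = best
    return out

def fuse(curr, ref_calibrated, alpha=0.5):
    """Blend calibrated reference features into the current-frame features,
    so cues occluded in the current frame can be borrowed from the reference."""
    return alpha * curr + (1 - alpha) * ref_calibrated
```

As a sanity check, if the reference frame is the current frame shifted by one pixel, the calibration step recovers the current-frame features at interior locations, and `fuse` then mixes them with the (possibly occluded) current features.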