Tamed Warping Network for High-Resolution Semantic Video Segmentation

4 May 2020
Songyuan Li
Junyi Feng
Xi Li
Abstract

Recent approaches for fast semantic video segmentation have reduced redundancy by warping feature maps across adjacent frames, greatly speeding up the inference phase. However, the accuracy drops severely owing to the errors incurred by warping. In this paper, we propose a novel framework and design a simple yet effective correction stage after warping. Specifically, we build a non-key-frame CNN, fusing warped context features with current spatial details. Based on the feature fusion, our Context Feature Rectification (CFR) module learns the model's difference from a per-frame model to correct the warped features. Furthermore, our Residual-Guided Attention (RGA) module utilizes the residual maps in the compressed domain to help CFR focus on error-prone regions. Results on Cityscapes show that the accuracy significantly increases from 67.3% to 71.6%, while the speed edges down only slightly, from 65.5 FPS to 61.8 FPS, at a resolution of 1024×2048. For non-rigid categories, e.g., "human" and "object", the improvements are even higher than 18 percentage points.
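The two ideas in the abstract — warping features from a key frame, then blending them with current-frame features under a residual-guided weight — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the nearest-neighbor sampling (the paper would use bilinear warping inside a CNN), and the simple normalized-residual attention are all illustrative assumptions.

```python
import numpy as np

def warp_features(feat, flow):
    """Backward-warp a feature map of shape (C, H, W) by a per-pixel
    flow of shape (2, H, W), where flow[0] is the x-offset and flow[1]
    the y-offset. Nearest-neighbor sampling is used here for brevity;
    real warping-based segmentation pipelines typically use bilinear
    sampling (e.g. grid_sample in PyTorch)."""
    _, H, W = feat.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + flow[1]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[0]).astype(int), 0, W - 1)
    return feat[:, src_y, src_x]

def residual_guided_attention(warped, current, residual):
    """Blend warped context features with current-frame features,
    pushing error-prone regions (large compressed-domain residual)
    toward the current-frame features. A hypothetical stand-in for
    the paper's RGA module."""
    att = residual / (residual.max() + 1e-8)  # normalize residual to [0, 1]
    return (1.0 - att) * warped + att * current
```

With zero flow the warp is the identity, and with zero residual the blend returns the warped features unchanged; a large residual everywhere falls back entirely to the current-frame features.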
