Semantic Video Segmentation by Gated Recurrent Flow Propagation

Semantic video segmentation is challenging due to the sheer amount of data that needs to be processed and labeled in order to construct accurate models. In this paper we present a deep, end-to-end trainable methodology for video segmentation that is capable of leveraging information present in unlabeled data in order to improve semantic estimates. Our model combines a convolutional architecture and a spatial transformer recurrent layer that are able to temporally propagate labeling information by means of optical flow, adaptively gated based on its locally estimated uncertainty. The flow, the recognition and the gated propagation modules can be trained jointly, end-to-end. The gated recurrent flow propagation component of our model can be plugged into any static semantic segmentation architecture, turning it into a weakly supervised video processing one. Our extensive experiments on the challenging CityScapes dataset indicate that the resulting model can leverage unlabeled temporal frames next to a labeled one in order to improve both the video segmentation accuracy and the consistency of its temporal labeling, at no additional annotation cost.
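To make the gated flow propagation idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: it warps the previous frame's semantic features along the optical flow and blends them with the current frame's features through a learned gate. The module and method names (`GatedFlowPropagation`, `warp`) are illustrative, and the per-pixel warping residual used here is only a stand-in for the locally estimated flow uncertainty described in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFlowPropagation(nn.Module):
    """Illustrative sketch: warp previous-frame semantic features along
    optical flow, then fuse with current-frame features via a learned gate
    that down-weights regions where the warp appears unreliable."""

    def __init__(self, channels):
        super().__init__()
        # Gate conditioned on current features, warped features, and the
        # warping residual (a crude proxy for local flow uncertainty).
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels + 1, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def warp(self, feat_prev, flow):
        # flow: (B, 2, H, W) displacement in pixels; grid_sample expects
        # sampling coordinates normalized to [-1, 1].
        b, _, h, w = feat_prev.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, device=flow.device, dtype=flow.dtype),
            torch.arange(w, device=flow.device, dtype=flow.dtype),
            indexing="ij",
        )
        grid_x = (xs + flow[:, 0]) / (w - 1) * 2 - 1
        grid_y = (ys + flow[:, 1]) / (h - 1) * 2 - 1
        grid = torch.stack((grid_x, grid_y), dim=-1)  # (B, H, W, 2)
        return F.grid_sample(feat_prev, grid, align_corners=True)

    def forward(self, feat_curr, feat_prev, flow):
        warped = self.warp(feat_prev, flow)
        # Warping residual as a simple local confidence signal.
        residual = (feat_curr - warped).abs().mean(dim=1, keepdim=True)
        g = self.gate(torch.cat([feat_curr, warped, residual], dim=1))
        # Gated blend: rely on propagated features only where the gate is high.
        return g * warped + (1 - g) * feat_curr
```

In this sketch the gate is a single sigmoid convolution; in the paper the gating, flow estimation, and recognition modules are trained jointly end-to-end, so any of these components could be replaced by a deeper, jointly learned network.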