
Joint Parsing of Cross-view Scenes with Spatio-temporal Semantic Parse Graphs

Abstract

Cross-view video understanding is an important yet under-explored area in computer vision. In this paper, we introduce a joint parsing method that takes view-centric proposals from pre-trained computer vision models and produces spatio-temporal parse graphs that represent a coherent scene-centric understanding of cross-view scenes. Our key observations are that overlapping fields of view embed rich appearance and geometry correlations, and that knowledge segments corresponding to individual vision tasks are governed by consistency constraints available in commonsense knowledge. The proposed joint parsing framework models such correlations and constraints explicitly and generates semantic parse graphs about the scene. Quantitative experiments show that scene-centric predictions in the parse graph outperform view-centric predictions.
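
To make the idea of aggregating view-centric proposals into a scene-centric parse graph concrete, here is a minimal, hypothetical sketch in Python. The class and function names (ViewProposal, ParseGraphNode, joint_parse) and the confidence-weighted voting rule are illustrative assumptions standing in for the paper's consistency constraints, not the authors' actual data structures or inference method.

```python
# Hypothetical sketch: merging per-view detections into a scene-centric
# spatio-temporal parse graph. Names and the voting rule are assumptions.
from dataclasses import dataclass, field
from collections import Counter
from typing import Dict, List, Tuple


@dataclass
class ViewProposal:
    """A detection from one camera view at one frame (output of a pre-trained model)."""
    view_id: str
    frame: int
    label: str                                   # e.g. object class or human attribute
    bbox: Tuple[float, float, float, float]      # (x, y, w, h) in view coordinates
    score: float                                 # detector confidence


@dataclass
class ParseGraphNode:
    """A scene-centric entity: one node of the spatio-temporal parse graph."""
    entity_id: int
    proposals: List[ViewProposal] = field(default_factory=list)

    def scene_label(self) -> str:
        """Resolve conflicting view-centric labels by confidence-weighted voting,
        a toy stand-in for the paper's cross-view consistency constraints."""
        votes: Counter = Counter()
        for p in self.proposals:
            votes[p.label] += p.score
        return votes.most_common(1)[0][0]


def joint_parse(proposals_by_entity: Dict[int, List[ViewProposal]]) -> List[ParseGraphNode]:
    """Build a toy parse graph: one node per cross-view entity association."""
    return [ParseGraphNode(eid, props) for eid, props in proposals_by_entity.items()]


if __name__ == "__main__":
    # Two views disagree on the label for the same entity at frame 0.
    proposals = {
        7: [
            ViewProposal("cam_A", 0, "person", (10, 20, 50, 120), 0.9),
            ViewProposal("cam_B", 0, "cyclist", (200, 40, 60, 130), 0.4),
        ]
    }
    graph = joint_parse(proposals)
    print(graph[0].scene_label())  # -> "person" (higher-confidence view wins)
```

In this toy version, cross-view association is assumed to be given (the `proposals_by_entity` mapping); the paper's framework additionally exploits appearance and geometry correlations across views to establish those associations and to enforce consistency over time.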
