
Unsupervised Visual Representation Learning by Context Prediction

Abstract

This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image in the collection and train a discriminative model to predict their relative position within the image. We argue that doing well on this task will require the model to learn about the layout of visual objects and object parts. We demonstrate that the feature representation learned using this within-image context prediction task is indeed able to capture visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned features, when used as pre-training for the R-CNN object detection pipeline, provide a significant boost over random initialization on Pascal object detection, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.
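Below is a minimal sketch of the pretext task the abstract describes: sampling a pair of patches from an unlabeled image together with a label encoding their relative position, which the discriminative model is trained to predict. The patch size, gap, and 8-way neighborhood used here are illustrative assumptions, not values stated in the abstract.

import numpy as np

# The 8 possible positions of the second patch relative to the first,
# given as (row_offset, col_offset) in patch-grid units.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           ( 0, -1),          ( 0, 1),
           ( 1, -1), ( 1, 0), ( 1, 1)]

def sample_patch_pair(image, patch_size=96, gap=48, rng=None):
    """Return (anchor_patch, neighbor_patch, label) for one training example.

    `label` indexes into OFFSETS and identifies where the neighbor patch sits
    relative to the anchor; predicting it is the free supervisory signal.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    stride = patch_size + gap  # distance between neighboring patch corners

    # Choose an anchor location such that every possible neighbor patch
    # stays inside the image bounds.
    top = rng.integers(stride, h - stride - patch_size + 1)
    left = rng.integers(stride, w - stride - patch_size + 1)

    label = rng.integers(len(OFFSETS))
    dr, dc = OFFSETS[label]
    n_top, n_left = top + dr * stride, left + dc * stride

    anchor = image[top:top + patch_size, left:left + patch_size]
    neighbor = image[n_top:n_top + patch_size, n_left:n_left + patch_size]
    return anchor, neighbor, int(label)

# Example: draw one training pair from a random stand-in image.
if __name__ == "__main__":
    img = np.random.rand(400, 400, 3)
    a, n, y = sample_patch_pair(img)
    print(a.shape, n.shape, "relative-position label:", y)

In a full pipeline, many such pairs would be fed to a two-stream network whose classification loss over the 8 relative positions drives representation learning; that architecture is described in the paper itself, not in this sketch.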
