Generating rich manual annotations for an image dataset is a crucial limitation of the current state of the art in object localization and detection. This paper introduces self-taught object localization, a novel approach that leverages deep convolutional networks trained for whole-image recognition to localize objects in images without additional human supervision, i.e., without using any ground-truth bounding boxes for training. The key idea is to analyze the change in the recognition scores when artificially graying out different regions of the image. We observe that graying out a region that contains an object typically causes a significant drop in the recognition score. This intuition is embedded into an agglomerative clustering technique that generates self-taught localization hypotheses. For a small number of hypotheses, our object localization scheme greatly outperforms prior subwindow proposal methods in terms of both recall and precision. Our experiments on a challenging dataset of 200 classes indicate that our automatically-generated hypotheses can be used to train object detectors in a weakly-supervised fashion, with recognition results remarkably close to those obtained by training on manually annotated bounding boxes.
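To make the graying-out intuition concrete, the sketch below scores image regions by masking each one and measuring the drop in a classifier's score; regions whose removal hurts recognition most are likely to contain the object. This is a simplified sliding-window illustration, not the authors' exact procedure (which clusters masked regions agglomeratively); `classify`, the patch size, and the gray value are all hypothetical stand-ins.

```python
import numpy as np

def score_drop_map(image, classify, class_idx, patch=32, stride=32,
                   gray_value=0.5):
    """Estimate per-pixel object evidence by graying out square regions
    and measuring the drop in the recognition score.

    `classify` is any function mapping an HxWx3 float image to a vector
    of class scores -- a hypothetical stand-in for the whole-image CNN.
    """
    base = classify(image)[class_idx]          # score on the intact image
    h, w = image.shape[:2]
    drops = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = gray_value  # gray out region
            drop = base - classify(masked)[class_idx]      # score change
            drops[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    # High average drop suggests the region contains the object.
    return drops / np.maximum(counts, 1)

# Toy usage with a dummy "classifier" that responds to a bright square.
rng = np.random.default_rng(0)
img = rng.random((128, 128, 3)) * 0.1
img[40:80, 40:80] = 1.0                        # pretend this is the object
dummy = lambda im: np.array([im[40:80, 40:80].mean()])
heat = score_drop_map(img, dummy, class_idx=0)
```

In the paper's full method, such score drops are aggregated over many masked regions and merged by agglomerative clustering into subwindow hypotheses, rather than read off a fixed grid as in this sketch.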