Deep learning yields state-of-the-art semantic segmentation, provided that a large number of images with pixel-wise annotations is available. To alleviate the expensive data collection process, we propose a semi-supervised domain adaptation method for the specific case of images with similar semantic content but different pixel distributions. A network trained with supervision on a past dataset is fine-tuned on the new dataset so as to preserve its feature maps. Domain adaptation thus reduces to a simple regression between feature maps and requires no annotations on the new dataset. This method achieves performance comparable to classic transfer learning on the PASCAL VOC dataset with synthetic transformations.
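The following is a minimal sketch of the idea described above, assuming a PyTorch setup: a frozen copy of the source-trained network provides reference feature maps, and the adapted network is fine-tuned on target-domain images with a plain L2 regression loss between feature maps, using no annotations. The names (`adapt_encoder`, `target_loader`) and the pairing of source/target images (e.g., synthetically transformed versions of the same image) are illustrative assumptions, not the paper's exact training procedure.

```python
# Sketch: domain adaptation as a regression between feature maps (assumptions noted above).
import copy
import torch
import torch.nn as nn

def adapt_encoder(encoder: nn.Module, target_loader, epochs: int = 10, lr: float = 1e-4):
    """Fine-tune `encoder` on target-domain images so its feature maps
    regress toward those of the frozen source-trained network."""
    reference = copy.deepcopy(encoder).eval()   # frozen network trained on the past dataset
    for p in reference.parameters():
        p.requires_grad_(False)

    optimizer = torch.optim.Adam(encoder.parameters(), lr=lr)
    criterion = nn.MSELoss()                    # simple regression between feature maps

    for _ in range(epochs):
        for source_img, target_img in target_loader:   # paired images, no pixel-wise labels
            with torch.no_grad():
                ref_feats = reference(source_img)       # feature maps to preserve
            feats = encoder(target_img)                 # feature maps on the new domain
            loss = criterion(feats, ref_feats)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return encoder
```

Because the loss only compares feature maps, the fine-tuning step needs no segmentation masks on the new dataset; the original decoder (or classifier head) can then be reused on the adapted features.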