Synthetic to Real Adaptation with Deep Generative Correlation Alignment Networks

Synthetic images rendered from 3D CAD models have previously been used to augment training data for object recognition algorithms. However, the generated images are non-photorealistic and do not match real image statistics. This leads to a large domain discrepancy, causing models trained on synthetic data to perform poorly on real domains. Recent work has shown the great potential of deep convolutional neural networks to generate realistic images, but has not addressed synthetic-to-real domain adaptation. Inspired by these ideas, we propose the Deep Generative Correlation Alignment Network (DGCAN) to synthesize training images using a novel domain adaptation algorithm. DGCAN leverages an L2 loss and the correlation alignment (CORAL) loss to minimize the domain discrepancy between generated and real images in deep feature space. The rendered results demonstrate that DGCAN can synthesize object shape from 3D CAD models together with structured texture from a small number of real background images. Experimentally, we show that training classifiers on the generated data significantly boosts performance when testing on the real image domain, improving upon several existing methods.
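To make the objective concrete, below is a minimal PyTorch sketch of the two loss terms the abstract names: the standard CORAL loss (squared Frobenius distance between the feature covariance matrices of the two domains) combined with an L2 feature-matching term. This follows the published CORAL formulation rather than the authors' exact implementation; the function names, the `f_cad` content-feature input, and the `weight_coral` parameter are illustrative assumptions.

```python
import torch

def coral_loss(f_gen: torch.Tensor, f_real: torch.Tensor) -> torch.Tensor:
    """Standard CORAL loss: squared Frobenius distance between the
    covariance matrices of generated and real deep features.
    f_gen, f_real: (batch, d) feature matrices from the two domains."""
    d = f_gen.size(1)

    def covariance(f: torch.Tensor) -> torch.Tensor:
        f = f - f.mean(dim=0, keepdim=True)     # center the features
        return (f.t() @ f) / (f.size(0) - 1)    # (d, d) covariance

    diff = covariance(f_gen) - covariance(f_real)
    return (diff * diff).sum() / (4.0 * d * d)  # standard 1/(4 d^2) scaling

def dgcan_objective(f_gen, f_cad, f_real, weight_coral: float = 1.0):
    """Hypothetical combined objective: an L2 term keeping generated
    features close to the CAD rendering's features (content), plus a
    CORAL term aligning generated and real feature statistics."""
    content = torch.mean((f_gen - f_cad) ** 2)  # L2 content loss
    return content + weight_coral * coral_loss(f_gen, f_real)
```

In this reading, the L2 term anchors the generated image to the object shape of the CAD rendering while the CORAL term pulls its deep feature statistics toward those of the real background images, which matches the abstract's description of combining shape from CAD models with texture statistics from real data.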