Deep Co-Training with Task Decomposition for Semi-Supervised Domain
Adaptation
Semi-supervised domain adaptation (SSDA) aims to adapt models from a labeled source domain to a different but related target domain, from which unlabeled data and a small set of labeled data are provided. In this paper we propose a new approach to SSDA, which is to explicitly decompose the SSDA task into two sub-tasks: a semi-supervised learning (SSL) task in the target domain and an unsupervised domain adaptation (UDA) task across domains. We show that these two sub-tasks yield very different classifiers and thus fit naturally into the well-established co-training framework, in which the two classifiers exchange their highly confident predictions to iteratively "teach each other" so that both classifiers can excel in the target domain. We call our approach Deep Co-Training with Task Decomposition (DeCoTa). DeCoTa requires no adversarial training, making it fairly easy to implement. DeCoTa achieves state-of-the-art results on several SSDA datasets, outperforming the prior art by a notable 4% margin on DomainNet.
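The pseudo-label exchange at the heart of co-training can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the confidence threshold, and the use of raw softmax probabilities are all assumptions made for the sketch.

```python
import numpy as np

def cotrain_exchange(probs_a, probs_b, threshold=0.9):
    """One round of co-training pseudo-label exchange.

    probs_a, probs_b: (N, C) class-probability arrays over the same
    unlabeled target pool, produced by two different classifiers
    (e.g., an SSL-trained model and a UDA-trained model).
    Returns, for each classifier, the indices of unlabeled examples
    and the pseudo-labels it receives from its peer's
    high-confidence predictions.
    """
    # Confidence = max class probability; prediction = argmax class.
    conf_a, pred_a = probs_a.max(axis=1), probs_a.argmax(axis=1)
    conf_b, pred_b = probs_b.max(axis=1), probs_b.argmax(axis=1)
    # A teaches B with examples A is confident about, and vice versa.
    to_b = np.flatnonzero(conf_a >= threshold)
    to_a = np.flatnonzero(conf_b >= threshold)
    return (to_a, pred_b[to_a]), (to_b, pred_a[to_b])
```

In practice each classifier would then be retrained on its labeled data plus the pseudo-labeled examples it received, and the exchange would repeat for several rounds.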