Domain Agnostic Prototypical Distribution for Unsupervised Model Adaptation
We develop an algorithm for adapting a classifier from a labeled source domain to an unlabeled target domain in a sequential learning setting. This problem has been studied extensively in the unsupervised domain adaptation (UDA) literature, but existing UDA methods assume a joint learning setting in which the model is trained on the source and target domain data simultaneously. We consider a more practical setting, in which the model has already been trained on labeled source domain data and must then be adapted to the unlabeled target domain without access to the source training data. We tackle this problem by aligning the source and target distributions in a discriminative embedding space. To overcome the challenges of learning in a sequential setting, we learn an intermediate prototypical distribution from the labeled source data and then use this distribution to transfer knowledge to the target domain. We provide theoretical justification for the proposed algorithm by showing that it optimizes an upper bound on the expected risk in the target domain. Extensive experiments on several standard benchmarks demonstrate the competitiveness of the proposed model adaptation method.
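The abstract does not spell out the algorithm, but the core idea of a class-wise prototypical distribution learned from source data and used for source-free alignment can be sketched as follows. This is a minimal illustration, not the authors' implementation: all function names are hypothetical, the prototype is simplified to a per-class mean embedding, and the alignment loss is approximated by the distance of each target embedding to its nearest prototype (serving as a pseudo-label).

```python
import numpy as np

def fit_prototypes(source_features, source_labels, num_classes):
    """Summarize the labeled source data by one prototype (mean embedding)
    per class; after this step the source data itself is no longer needed."""
    return np.stack([source_features[source_labels == c].mean(axis=0)
                     for c in range(num_classes)])

def adapt_step(target_features, prototypes):
    """Pseudo-label each target sample by its nearest prototype and return
    the mean squared distance to the assigned prototype -- a stand-in for
    the distribution-alignment loss minimized during adaptation."""
    dists = np.linalg.norm(
        target_features[:, None, :] - prototypes[None, :, :], axis=2)
    pseudo_labels = dists.argmin(axis=1)
    loss = (dists[np.arange(len(target_features)), pseudo_labels] ** 2).mean()
    return pseudo_labels, loss
```

In practice the target encoder would be updated by gradient descent on such a loss (and the paper's prototypical distribution is richer than a point estimate), but the sketch conveys why no source samples are required at adaptation time: the prototypes alone carry the transferable class structure.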