Zero-Shot Deep Domain Adaptation
Current state-of-the-art approaches to domain adaptation and fusion show promising results when either labeled or unlabeled task-relevant target-domain training data are available. However, prior work often overlooks the case in which task-relevant target-domain training data are unavailable altogether. To address this, we propose zero-shot deep domain adaptation (ZDDA), which, instead of relying on task-relevant target-domain training data, learns privileged information from task-irrelevant dual-domain pairs. ZDDA first learns a source-domain representation that is both suitable for the task of interest and close to a given general target-domain representation. It then performs domain fusion by simulating task-relevant target-domain representations from task-relevant source-domain data. On a scene classification task from the SUN RGB-D dataset, our method outperforms domain adaptation and fusion baselines, making it the first published domain adaptation and fusion method that requires no task-relevant target-domain training data. We will validate our method on other tasks and/or domains in a follow-up report.
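The first ZDDA step described above can be illustrated with a minimal sketch: given task-irrelevant dual-domain pairs and a fixed, general target-domain encoder, train the source-domain encoder so that its representations match the target ones. The sketch below uses linear encoders, an L2 matching loss, and plain gradient descent; all names, shapes, and the choice of loss are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Illustrative sketch of ZDDA's representation-matching step, assuming
# linear encoders and an L2 matching loss (not the paper's architecture).

rng = np.random.default_rng(0)

# Task-irrelevant dual-domain pairs (e.g. depth/RGB views of the same scene).
x_src = rng.normal(size=(32, 16))                    # source-domain inputs
x_tgt = x_src + 0.1 * rng.normal(size=(32, 16))      # paired target-domain inputs

W_tgt = rng.normal(size=(16, 4))   # fixed, general target-domain encoder
W_src = rng.normal(size=(16, 4))   # trainable source-domain encoder

z_tgt = x_tgt @ W_tgt              # frozen target-domain representations


def matching_loss(W):
    """Mean squared distance between source and target representations."""
    return np.mean((x_src @ W - z_tgt) ** 2)


# Plain gradient descent on the matching loss.
losses = [matching_loss(W_src)]
for _ in range(200):
    # Gradient of the L2 matching loss w.r.t. W_src (up to a constant factor).
    grad = 2.0 * x_src.T @ (x_src @ W_src - z_tgt) / x_src.shape[0]
    W_src -= 0.01 * grad
    losses.append(matching_loss(W_src))

# After training, x_src @ W_src approximates z_tgt, so the source encoder can
# stand in for the unavailable task-relevant target-domain representations
# when performing the second (domain fusion) step.
```

In the full method, the source encoder would additionally be trained on the task of interest, so that the simulated target-domain representations remain useful for that task rather than only matching the general target encoder.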