
Domain Translation via Latent Space Mapping

IEEE International Joint Conference on Neural Networks (IJCNN), 2022
Main: 15 pages
Appendix: 12 pages
Bibliography: 3 pages
21 figures
7 tables
Abstract

In this paper, we investigate the problem of multi-domain translation: given an element a of domain A, we would like to generate a corresponding sample b in another domain B, and vice versa. Acquiring supervision in multiple domains can be tedious, so we propose to learn this translation from one domain to another when supervision is available as a pair (a, b) ∼ A × B, while leveraging possible unpaired data when only a ∼ A or only b ∼ B is available. We introduce a new unified framework called Latent Space Mapping (LSM) that exploits the manifold assumption in order to learn a latent space for each domain. Unlike existing approaches, we further regularize each latent space using the other available domains by learning the dependency between each pair of domains. We evaluate our approach on three tasks: i) image translation on a synthetic dataset, ii) the real-world task of semantic segmentation for medical images, and iii) the real-world task of facial landmark detection.
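To make the setup concrete, here is a minimal sketch of the idea described in the abstract: a per-domain encoder/decoder pair plus a mapping between latent spaces, with reconstruction losses that need only unpaired data and a latent-alignment loss that needs pairs. All names, dimensions, and the use of plain linear maps are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions for domains A and B and a shared latent size (assumptions).
dim_a, dim_b, dim_z = 4, 3, 2

# Linear encoder/decoder per domain, plus a latent-to-latent mapping z_A -> z_B.
E_A = rng.normal(size=(dim_z, dim_a)); D_A = rng.normal(size=(dim_a, dim_z))
E_B = rng.normal(size=(dim_z, dim_b)); D_B = rng.normal(size=(dim_b, dim_z))
M_AB = rng.normal(size=(dim_z, dim_z))

def translate_a_to_b(a):
    """Translate a sample from A to B: encode in A, map latents, decode in B."""
    return D_B @ (M_AB @ (E_A @ a))

def lsm_losses(a, b):
    """Per-domain reconstruction terms (unpaired data suffices) and a
    latent-alignment term (requires a pair (a, b))."""
    z_a, z_b = E_A @ a, E_B @ b
    rec_a = np.mean((D_A @ z_a - a) ** 2)     # uses only a ~ A
    rec_b = np.mean((D_B @ z_b - b) ** 2)     # uses only b ~ B
    pair = np.mean((M_AB @ z_a - z_b) ** 2)   # uses a pair (a, b) ~ A x B
    return rec_a, rec_b, pair
```

In this sketch, a training objective would sum the reconstruction terms over all available samples and add the pairwise term only for the samples where supervision exists, which is how the framework can mix paired and unpaired data.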
