Learning Shared Representations from Unpaired Data

Learning shared representations is a central problem in multimodal representation learning. Current approaches to building a shared embedding space rely heavily on paired samples from each modality, which are significantly harder to obtain than unpaired ones. In this work, we demonstrate that shared representations can be learned almost exclusively from unpaired data. Our arguments are grounded in the spectral embeddings of the random walk matrices constructed independently from each unimodal representation. Empirical results in computer vision and natural language processing domains support the potential of this approach, revealing the effectiveness of unpaired data in capturing meaningful cross-modal relations and demonstrating strong performance in retrieval, generation, embedding arithmetic, and zero-shot and cross-domain classification. To the best of our knowledge, this work is the first to demonstrate these capabilities almost exclusively from unpaired samples, giving rise to a cross-modal embedding that could be viewed as universal, i.e., independent of the specific modalities of the data. Our code is publicly available at this https URL.
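The core ingredient named above, a spectral embedding of a random walk matrix built from one modality's representations, can be sketched generically as follows. This is a minimal illustration of the standard diffusion-maps-style construction (Gaussian affinities, row normalization, leading non-trivial eigenvectors), not a reproduction of the paper's method; the function name, kernel bandwidth `sigma`, and number of components are assumptions for the example.

```python
import numpy as np

def random_walk_spectral_embedding(X, n_components=2, sigma=1.0):
    """Spectral embedding of the random walk matrix over samples of one modality.

    X: (n_samples, n_features) unimodal representations.
    Returns an (n_samples, n_components) embedding. Illustrative sketch only.
    """
    # Pairwise squared Euclidean distances
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    # Gaussian affinity matrix
    W = np.exp(-d2 / (2.0 * sigma**2))
    # Row-normalize to obtain a row-stochastic random walk matrix
    P = W / W.sum(axis=1, keepdims=True)
    # Eigendecomposition; sort eigenvectors by descending eigenvalue
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1)
    return vecs.real[:, order[1:n_components + 1]]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
emb = random_walk_spectral_embedding(X)
print(emb.shape)  # (50, 2)
```

Since such a matrix is built independently per modality, no paired samples are needed at this stage; aligning the resulting unimodal spectral embeddings into one shared space is where the paper's contribution lies.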
@article{yacobi2025_2505.21524,
  title={Learning Shared Representations from Unpaired Data},
  author={Amitai Yacobi and Nir Ben-Ari and Ronen Talmon and Uri Shaham},
  journal={arXiv preprint arXiv:2505.21524},
  year={2025}
}