Cross-Lingual Representation Alignment Through Contrastive Image-Caption Tuning

Abstract
Multilingual alignment of sentence representations has mostly required bitexts to bridge the gap between languages. We investigate whether visual information can bridge this gap instead. Image-caption datasets are straightforward to create without multilingual expertise, offering a more efficient alternative for low-resource languages. We find that (i) multilingual image-caption alignment implicitly aligns text representations across languages, (ii) languages unseen by the encoder during pretraining can be incorporated into this alignment post hoc, and (iii) the aligned representations are usable for cross-lingual Natural Language Understanding (NLU) and bitext retrieval.
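The contrastive image-caption tuning the abstract refers to is typically a CLIP-style symmetric InfoNCE objective: captions (in any language) are pulled toward their paired image embedding and pushed away from other images in the batch. The sketch below is an illustrative NumPy implementation of that generic loss, not the paper's exact training code; the function and parameter names are assumptions.

```python
import numpy as np

def info_nce_loss(image_emb, text_emb, temperature=0.07):
    """CLIP-style symmetric contrastive loss (illustrative sketch).

    image_emb, text_emb: (batch, dim) arrays where row i of each
    array comes from the same image-caption pair.
    """
    # L2-normalize so the dot product is cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise similarity matrix, scaled by temperature
    logits = image_emb @ text_emb.T / temperature

    # Matched pairs lie on the diagonal
    labels = np.arange(logits.shape[0])

    def cross_entropy(l):
        # Numerically stable log-softmax over each row
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image->text and text->image directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2
```

Because captions in different languages are contrasted against the same shared image space, minimizing this loss pushes translations of the same caption toward each other, which is the implicit cross-lingual text alignment the paper studies.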
@article{krasner2025_2505.13628,
  title={Cross-Lingual Representation Alignment Through Contrastive Image-Caption Tuning},
  author={Nathaniel Krasner and Nicholas Lanuzo and Antonios Anastasopoulos},
  journal={arXiv preprint arXiv:2505.13628},
  year={2025}
}