UniMoCo: Unified Modality Completion for Robust Multi-Modal Embeddings

Current research has explored vision-language models for multi-modal embedding tasks such as information retrieval, visual grounding, and classification. However, real-world scenarios often involve diverse modality combinations between queries and targets, such as text and image to text, text and image to text and image, and text to text and image. These diverse combinations pose significant challenges for existing models, which struggle to align all modality combinations within a unified embedding space during training, degrading performance at inference. To address this limitation, we propose UniMoCo, a novel vision-language model architecture designed for multi-modal embedding tasks. UniMoCo introduces a modality-completion module that generates visual features from textual inputs, ensuring modality completeness for both queries and targets. In addition, we develop a specialized training strategy that aligns embeddings from the original and modality-completed inputs, ensuring consistency within the embedding space. This enables the model to robustly handle a wide range of modality combinations across embedding tasks. Experiments show that UniMoCo outperforms previous methods while remaining consistently robust across diverse settings. More importantly, we identify and quantify an inherent bias in conventional approaches caused by the imbalance of modality combinations in training data, which can be mitigated through our modality-completion paradigm. The code is available at this https URL.
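
The abstract does not give implementation details, but the core idea of pairing a modality-completion module with an embedding-consistency objective can be illustrated with a minimal sketch. The module names, dimensions, and the InfoNCE-style loss below are assumptions for illustration only and are not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityCompletion(nn.Module):
    """Hypothetical module that produces pseudo-visual features from text
    features, so a text-only input becomes a (text + pseudo-image) pair."""

    def __init__(self, text_dim: int = 768, vis_dim: int = 1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(text_dim, vis_dim),
            nn.GELU(),
            nn.Linear(vis_dim, vis_dim),
        )

    def forward(self, text_feat: torch.Tensor) -> torch.Tensor:
        # (batch, text_dim) -> (batch, vis_dim) pseudo-visual features
        return self.proj(text_feat)


def consistency_loss(orig_emb: torch.Tensor, completed_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Assumed alignment objective: the embedding of an original input and the
    embedding of its modality-completed counterpart should match in the shared
    space (contrastive matching over the batch)."""
    orig = F.normalize(orig_emb, dim=-1)
    comp = F.normalize(completed_emb, dim=-1)
    logits = orig @ comp.t() / temperature            # pairwise similarities
    labels = torch.arange(orig.size(0), device=orig.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    completion = ModalityCompletion()
    text_feat = torch.randn(8, 768)                   # toy text features
    pseudo_vis = completion(text_feat)                # completed visual features
    # Placeholder pooled embeddings standing in for the VLM's outputs.
    loss = consistency_loss(torch.randn(8, 512), torch.randn(8, 512))
    print(pseudo_vis.shape, loss.item())

In this reading, every query or target is first "completed" to a full text-plus-image form, and training then ties the original and completed embeddings together so that modality-imbalanced combinations do not drift apart in the shared space.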
@article{qin2025_2505.11815,
  title   = {UniMoCo: Unified Modality Completion for Robust Multi-Modal Embeddings},
  author  = {Jiajun Qin and Yuan Pu and Zhuolun He and Seunggeun Kim and David Z. Pan and Bei Yu},
  journal = {arXiv preprint arXiv:2505.11815},
  year    = {2025}
}