To enhance the interpretability of multimodal unified representations, many studies have focused on discrete unified representations. These efforts typically start from contrastive learning and gradually extend to the disentanglement of modal information, achieving strong multimodal discrete unified representations. However, existing research often overlooks two critical issues: 1) quantization by Euclidean distance ignores the unequal importance of different feature dimensions, producing redundant representations after quantization; 2) each modality has unique characteristics, and a uniform alignment approach does not fully exploit them. To address these issues, we propose Training-free Optimization of Codebook (TOC) and Fine and Coarse cross-modal Information Disentangling (FCID). These methods refine the unified discrete representations from pretraining and perform fine- and coarse-grained information disentanglement tailored to the characteristics of each modality, achieving significant performance improvements over previous state-of-the-art models.
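To make the first issue concrete, the sketch below (an illustration, not the paper's implementation) shows standard codebook quantization: each feature vector is assigned to its nearest codebook entry under plain Euclidean distance, which weights every dimension equally regardless of how informative it is. The function name and data here are hypothetical.

```python
import numpy as np

def euclidean_quantize(features, codebook):
    """Map each feature row to the index of its nearest codebook row.

    Plain Euclidean distance treats all feature dimensions as equally
    important -- the behavior the abstract identifies as producing
    redundant quantized representations.
    """
    # Pairwise squared Euclidean distances, shape (N, K).
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K=8 codes, D=4 dimensions (toy sizes)
features = codebook[[2, 5]] + 0.01   # features lying near codes 2 and 5
print(euclidean_quantize(features, codebook))  # -> [2 5]
```

A training-free refinement in the spirit of TOC would adjust which dimensions contribute to this distance, rather than retraining the codebook itself.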
@article{huang2025_2403.05168,
  title   = {Enhancing Multimodal Unified Representations for Cross Modal Generalization},
  author  = {Hai Huang and Yan Xia and Shengpeng Ji and Shulei Wang and Hanting Wang and Minghui Fang and Jieming Zhu and Zhenhua Dong and Sashuai Zhou and Zhou Zhao},
  journal = {arXiv preprint arXiv:2403.05168},
  year    = {2025}
}