Enhancing Multimodal Unified Representations for Cross Modal Generalization

8 March 2024
Hai Huang, Yan Xia, Shengpeng Ji, Shulei Wang, Hanting Wang, Minghui Fang, Jieming Zhu, Zhenhua Dong, Sashuai Zhou, Zhou Zhao
Abstract

To enhance the interpretability of multimodal unified representations, many studies have focused on discrete unified representations. These efforts typically start with contrastive learning and gradually extend to the disentanglement of modal information, achieving solid multimodal discrete unified representations. However, existing research often overlooks two critical issues: 1) using Euclidean distance for quantization in discrete representations ignores the differing importance of feature dimensions, resulting in redundant representations after quantization; 2) different modalities have unique characteristics, and a uniform alignment approach does not fully exploit these traits. To address these issues, we propose Training-free Optimization of Codebook (TOC) and Fine and Coarse cross-modal Information Disentangling (FCID). These methods refine the unified discrete representations from pretraining and perform fine- and coarse-grained information disentanglement tailored to the specific characteristics of each modality, achieving significant performance improvements over previous state-of-the-art models.
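
The first issue the abstract raises concerns nearest-neighbor codebook lookup under plain Euclidean distance, which weights every feature dimension equally. As a rough illustration only (the abstract does not specify TOC's actual criterion, and the per-dimension weights below are hypothetical), the following sketch contrasts standard quantization with a dimension-weighted variant:

import torch

# Standard codebook quantization: unweighted Euclidean distance treats
# every feature dimension as equally important.
def quantize_euclidean(z, codebook):
    # z: (batch, dim) continuous features; codebook: (K, dim) code vectors
    dists = torch.cdist(z, codebook)      # (batch, K) pairwise Euclidean distances
    idx = dists.argmin(dim=1)             # index of nearest code per feature
    return codebook[idx], idx

# Hypothetical weighted variant: scale each dimension by an importance
# weight before the distance computation, so informative dimensions
# dominate the match. (Illustrative only; not the paper's TOC method.)
def quantize_weighted(z, codebook, w):
    # w: (dim,) nonnegative per-dimension importance weights
    # sqrt(w) scaling makes ||sqrt(w)*z - sqrt(w)*c|| the weighted distance
    dists = torch.cdist(z * w.sqrt(), codebook * w.sqrt())
    idx = dists.argmin(dim=1)
    return codebook[idx], idx

if __name__ == "__main__":
    torch.manual_seed(0)
    z = torch.randn(4, 8)
    codebook = torch.randn(16, 8)
    w = torch.rand(8)                     # illustrative importance weights
    _, idx_e = quantize_euclidean(z, codebook)
    _, idx_w = quantize_weighted(z, codebook, w)
    print(idx_e.tolist(), idx_w.tolist())  # the two metrics can select different codes

The point of the contrast is that under the unweighted metric, low-information dimensions contribute as much to code selection as discriminative ones, which is the redundancy the abstract attributes to Euclidean quantization.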

@article{huang2025_2403.05168,
  title={Enhancing Multimodal Unified Representations for Cross Modal Generalization},
  author={Hai Huang and Yan Xia and Shengpeng Ji and Shulei Wang and Hanting Wang and Minghui Fang and Jieming Zhu and Zhenhua Dong and Sashuai Zhou and Zhou Zhao},
  journal={arXiv preprint arXiv:2403.05168},
  year={2025}
}