ResearchTrend.AI
MSCI: Addressing CLIP's Inherent Limitations for Compositional Zero-Shot Learning

15 May 2025
Yixuan Wang
Shuai Xu
Xuelin Zhu
Yongqian Li
    VLM
Abstract

Compositional Zero-Shot Learning (CZSL) aims to recognize unseen state-object combinations by leveraging known combinations. Existing studies largely rely on the cross-modal alignment capabilities of CLIP but tend to overlook its limitations in capturing fine-grained local features, which stem from its architecture and training paradigm. To address this issue, we propose a Multi-Stage Cross-modal Interaction (MSCI) model that effectively explores and utilizes intermediate-layer information from CLIP's visual encoder. Specifically, we design two self-adaptive aggregators to extract local information from low-level visual features and to integrate global information from high-level visual features, respectively. This key information is progressively incorporated into textual representations through a stage-by-stage interaction mechanism, significantly enhancing the model's ability to perceive fine-grained local visual information. In addition, MSCI dynamically adjusts the attention weights between global and local visual information based on different combinations, as well as on different elements within the same combination, allowing it to flexibly adapt to diverse scenarios. Experiments on three widely used datasets fully validate the effectiveness and superiority of the proposed model. Data and code are available at this https URL.
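The abstract's core mechanism can be illustrated with a minimal numpy sketch: text tokens cross-attend to low-level (local) and high-level (global) visual features, and a gate blends the two contexts before a residual update. All names here (`msci_stage`, `cross_attention`, `alpha`) are illustrative assumptions, not the paper's actual implementation, which uses learned aggregators over CLIP's intermediate layers.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text, visual):
    # text: (T, d) queries; visual: (N, d) keys/values
    d = text.shape[-1]
    attn = softmax(text @ visual.T / np.sqrt(d), axis=-1)  # (T, N)
    return attn @ visual  # (T, d) context per text token

def msci_stage(text, low_feats, high_feats, alpha):
    # One interaction stage: gather local cues from low-level features
    # and global cues from high-level features, then gate and fuse them
    # into the textual representation via a residual update.
    local_ctx = cross_attention(text, low_feats)
    global_ctx = cross_attention(text, high_feats)
    # alpha in [0, 1] weights global vs. local context; in MSCI this
    # gate is predicted per combination/element rather than fixed.
    fused = alpha * global_ctx + (1.0 - alpha) * local_ctx
    return text + fused
```

Stacking several such stages, each fed by a different depth of the visual encoder, gives the "stage-by-stage" progressive fusion the abstract describes.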

View on arXiv
@article{wang2025_2505.10289,
  title={ MSCI: Addressing CLIP's Inherent Limitations for Compositional Zero-Shot Learning },
  author={ Yue Wang and Shuai Xu and Xuelin Zhu and Yicong Li },
  journal={arXiv preprint arXiv:2505.10289},
  year={ 2025 }
}