β-CLIP: Text-Conditioned Contrastive Learning for Multi-Granular Vision-Language Alignment

14 December 2025
Fatimah Zohra
Chen Zhao
Hani Itani
Bernard Ghanem
CLIP · VLM
arXiv:2512.12678 (abs) · PDF · HTML · GitHub (3★)
Main: 7 pages · 5 figures · Bibliography: 3 pages · 15 tables · Appendix: 6 pages
Abstract

CLIP achieves strong zero-shot image-text retrieval by aligning global vision and text representations, yet it falls behind on fine-grained tasks even when fine-tuned on long, detailed captions. In this work, we propose β-CLIP, a multi-granular text-conditioned contrastive learning framework designed to achieve hierarchical alignment between multiple textual granularities (from full captions to sentences and phrases) and their corresponding visual regions. For each level of granularity, β-CLIP uses cross-attention to dynamically pool image patches, producing contextualized visual embeddings. To address the semantic overlap inherent in this hierarchy, we introduce the β-Contextualized Contrastive Alignment Loss (β-CAL). This objective parameterizes the trade-off between strict query-specific matching and relaxed intra-image contextualization, supporting both soft Cross-Entropy and hard Binary Cross-Entropy formulations. Through extensive experiments, we demonstrate that β-CLIP significantly improves dense alignment, achieving 91.8% T2I and 92.3% I2T R@1 on Urban1K and 30.9% on FG-OVD (Hard), setting a new state of the art among methods trained without hard negatives. β-CLIP establishes a robust, adaptive baseline for dense vision-language correspondence. The code and models are released at this https URL.
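The abstract describes two mechanisms: cross-attention pooling conditioned on a text query, and a contrastive objective whose β parameter trades off strict query-specific matching against relaxed intra-image contextualization. The sketch below is a minimal PyTorch illustration of one plausible reading of the soft Cross-Entropy variant, not the authors' released implementation; the module and function names (`TextConditionedPooler`, `beta_contrastive_loss`), the tensor shapes, and the specific way β mixes the two targets are assumptions made for illustration.

```python
# Hedged sketch: text-conditioned pooling + a beta-weighted soft-CE contrastive target.
# All names, shapes, and the exact beta mixing are illustrative assumptions,
# not the paper's beta-CAL as released.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextConditionedPooler(nn.Module):
    """Cross-attention pooling: a text query attends over image patch tokens
    to produce one contextualized visual embedding per (image, query) pair."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_q: torch.Tensor, patches: torch.Tensor) -> torch.Tensor:
        # text_q:  (B, Q, D) query embeddings (captions / sentences / phrases)
        # patches: (B, N, D) patch tokens from the image encoder
        pooled, _ = self.attn(query=text_q, key=patches, value=patches)
        return F.normalize(pooled, dim=-1)  # (B, Q, D)

def beta_contrastive_loss(img_emb, txt_emb, same_image, beta=0.5, tau=0.07):
    """Soft cross-entropy variant: beta mixes a strict one-to-one target with a
    relaxed target spread over queries from the same image (an assumed reading
    of the strict-vs-contextualized trade-off)."""
    # img_emb, txt_emb: (M, D) flattened (image, query) pairs, L2-normalized
    # same_image:       (M, M) bool, True where two pairs share the same image
    logits = img_emb @ txt_emb.t() / tau
    strict = torch.eye(len(img_emb), device=logits.device)
    relaxed = same_image.float() / same_image.float().sum(dim=1, keepdim=True)
    target = beta * strict + (1.0 - beta) * relaxed
    return F.cross_entropy(logits, target)

# Usage with toy shapes
B, Q, N, D = 2, 3, 49, 512
pooler = TextConditionedPooler(D)
text_q = F.normalize(torch.randn(B, Q, D), dim=-1)
patches = torch.randn(B, N, D)
vis = pooler(text_q, patches).reshape(B * Q, D)
txt = text_q.reshape(B * Q, D)
img_id = torch.arange(B).repeat_interleave(Q)
same_image = img_id[:, None] == img_id[None, :]
loss = beta_contrastive_loss(vis, txt, same_image)
```

With β = 1 the target collapses to strict one-to-one matching; with β = 0 every query from the same image is treated as an equally valid positive, which is how this sketch interprets "relaxed intra-image contextualization".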
