OTFusion: Bridging Vision-only and Vision-Language Models via Optimal Transport for Transductive Zero-Shot Learning

16 June 2025
Qiyu Xu, Wenyang Chen, Zhanxuan Hu, Huafeng Li, Yonghang Tai
    VLM
Main: 7 pages · 4 figures · 2 tables · Bibliography: 3 pages
Abstract

Transductive zero-shot learning (ZSL) aims to classify unseen categories by leveraging both semantic class descriptions and the distribution of unlabeled test data. While Vision-Language Models (VLMs) such as CLIP excel at aligning visual inputs with textual semantics, they often rely too heavily on class-level priors and fail to capture fine-grained visual cues. In contrast, Vision-only Foundation Models (VFMs) like DINOv2 provide rich perceptual features but lack semantic alignment. To exploit the complementary strengths of these models, we propose OTFusion, a simple yet effective training-free framework that bridges VLMs and VFMs via Optimal Transport. Specifically, OTFusion aims to learn a shared probabilistic representation that aligns visual and semantic information by minimizing the transport cost between their respective distributions. This unified distribution enables coherent class predictions that are both semantically meaningful and visually grounded. Extensive experiments on 11 benchmark datasets demonstrate that OTFusion consistently outperforms the original CLIP model, achieving an average accuracy improvement of nearly 10%, all without any fine-tuning or additional annotations. The code will be publicly released after the paper is accepted.
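
The abstract describes the mechanism only at a high level. As a rough aid to intuition, the sketch below shows one way such a fusion could be set up: CLIP-style class probabilities define a semantic cost, DINOv2-style features define a visual cost, and an entropic optimal-transport solver distributes the unlabeled test samples across classes under both. Everything in it (the cost design, the equal weighting, the uniform marginals, and the random placeholder inputs) is an illustrative assumption, not the paper's actual algorithm or released code.

# Hedged illustration only: NOT the authors' implementation. It sketches the
# general recipe the abstract describes -- fusing class probabilities from a
# vision-language model (e.g. CLIP) with features from a vision-only model
# (e.g. DINOv2) through entropic optimal transport over the unlabeled test set.
# All inputs are random placeholders; costs, weights, and marginals are assumptions.
import numpy as np

def sinkhorn(cost, row_marginal, col_marginal, reg=0.05, n_iters=200):
    # Entropic OT via Sinkhorn iterations: returns a transport plan whose rows
    # sum to row_marginal and whose columns sum to col_marginal, concentrating
    # mass on low-cost (sample, class) pairs.
    K = np.exp(-cost / reg)
    u = np.ones_like(row_marginal)
    for _ in range(n_iters):
        v = col_marginal / (K.T @ u + 1e-12)
        u = row_marginal / (K @ v + 1e-12)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
n_samples, n_classes, feat_dim = 500, 10, 64

# Placeholder for CLIP zero-shot class probabilities on the unlabeled test set.
clip_probs = rng.dirichlet(np.ones(n_classes), size=n_samples)
# Placeholder for L2-normalized DINOv2 image features.
vfm_feats = rng.normal(size=(n_samples, feat_dim))
vfm_feats /= np.linalg.norm(vfm_feats, axis=1, keepdims=True)

# Visual class prototypes: CLIP-probability-weighted means of the VFM features.
prototypes = clip_probs.T @ vfm_feats
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

# Cost mixes semantic disagreement (from CLIP) with visual dissimilarity
# (from the VFM); the 0.5/0.5 weighting is an arbitrary illustrative choice.
semantic_cost = -np.log(clip_probs + 1e-12)
visual_cost = 1.0 - vfm_feats @ prototypes.T
cost = 0.5 * semantic_cost + 0.5 * visual_cost
cost /= cost.max()  # normalize for numerical stability in exp(-cost / reg)

# Uniform marginals: every sample carries equal mass and classes are assumed
# roughly balanced (a common transductive assumption, not stated in the paper).
plan = sinkhorn(cost,
                np.full(n_samples, 1.0 / n_samples),
                np.full(n_classes, 1.0 / n_classes))

fused_predictions = plan.argmax(axis=1)  # transductive class assignments
print(fused_predictions[:10])

In practice the placeholders would be replaced by real CLIP text-image similarities and DINOv2 embeddings, and the resulting transport plan (or its row-wise argmax) would serve as the transductive predictions over the unlabeled test set.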

@article{xu2025_2506.13723,
  title={OTFusion: Bridging Vision-only and Vision-Language Models via Optimal Transport for Transductive Zero-Shot Learning},
  author={Qiyu Xu and Wenyang Chen and Zhanxuan Hu and Huafeng Li and Yonghang Tai},
  journal={arXiv preprint arXiv:2506.13723},
  year={2025}
}