ResearchTrend.AI
arXiv:2407.09498
OT-VP: Optimal Transport-guided Visual Prompting for Test-Time Adaptation

12 June 2024
Yunbei Zhang
Akshay Mehra
Jihun Hamm
    VLM
Abstract

While Vision Transformers (ViTs) have demonstrated remarkable capabilities in learning representations, their performance is compromised when applied to unseen domains. Previous methods either engage in prompt learning during the training phase or modify model parameters at test time through entropy minimization. The former often overlooks unlabeled target data, while the latter doesn't fully address domain shifts. In this work, our approach, Optimal Transport-guided Test-Time Visual Prompting (OT-VP), handles these problems by leveraging prompt learning at test time to align the target and source domains without accessing the training process or altering pre-trained model parameters. This method involves learning a universal visual prompt for the target domain by optimizing the Optimal Transport distance. With just four prompt tokens learned, OT-VP achieves a 5.0% and 1.5% increase in averaged accuracy across single-source and multi-source settings on three benchmark datasets, which is 1.2× and 1.5× the improvement of the state-of-the-art method, respectively.
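The quantity at the heart of the abstract — an Optimal Transport distance between source and target feature distributions — can be sketched with a standard entropic (Sinkhorn) OT computation. This is a hedged illustration only: the cost function, regularization strength `eps`, and iteration count are generic choices, not details taken from the paper.

```python
import numpy as np

def sinkhorn_ot(src_feats, tgt_feats, eps=0.1, n_iters=200):
    """Entropic-regularized OT (Sinkhorn) distance between two feature sets.

    A minimal sketch of the kind of alignment objective OT-VP optimizes;
    the squared-Euclidean cost and eps=0.1 are illustrative assumptions.
    """
    # Pairwise squared-Euclidean cost matrix between the two feature sets.
    C = ((src_feats[:, None, :] - tgt_feats[None, :, :]) ** 2).sum(-1)
    n, m = C.shape
    a = np.full(n, 1.0 / n)   # uniform weights over source features
    b = np.full(m, 1.0 / m)   # uniform weights over target features
    K = np.exp(-C / eps)      # Gibbs kernel of the cost
    u = np.ones(n)
    for _ in range(n_iters):  # Sinkhorn fixed-point scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # approximate transport plan
    return float((P * C).sum())       # total transport cost

# Identical feature sets have near-zero transport cost; a shifted
# (domain-shifted) copy yields a clearly larger cost.
rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 8))
print(sinkhorn_ot(feats, feats.copy()))   # close to 0
print(sinkhorn_ot(feats, feats + 0.5))    # noticeably larger
```

In the method described above, the target features would come from the frozen ViT with the learned prompt tokens prepended, and the prompts would be updated by gradient descent through a differentiable version of this distance; this numpy sketch only computes the distance itself.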
