ResearchTrend.AI

CoCoA-Mix: Confusion-and-Confidence-Aware Mixture Model for Context Optimization

9 June 2025
Dasol Hong
Wooju Lee
Hyun Myung
Main: 7 pages · Bibliography: 3 pages · Appendix: 12 pages · 11 figures · 11 tables
Abstract

Prompt tuning, which adapts vision-language models by freezing the model parameters and optimizing only the prompt, has proven effective for task-specific adaptation. The core challenge in prompt tuning is improving specialization for a specific task while preserving generalization to unseen domains. However, frozen encoders often produce misaligned features, leading to confusion between classes and limiting specialization. To overcome this issue, we propose a confusion-aware loss (CoA-loss) that improves specialization by refining the decision boundaries between confusing classes. Additionally, we mathematically demonstrate that a mixture model can enhance generalization without compromising specialization. This is achieved using confidence-aware weights (CoA-weights), which adjust the weight of each prediction in the mixture model based on its confidence within the class domains. Extensive experiments show that CoCoA-Mix, a mixture model with CoA-loss and CoA-weights, outperforms state-of-the-art methods by enhancing both specialization and generalization. Our code is publicly available at this https URL.
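To make the mixture idea concrete, the following is a minimal NumPy sketch of confidence-weighted mixing as described in the abstract: each prompt in the mixture produces class probabilities, and per-class confidence weights (the role played by CoA-weights) determine how much each prompt contributes to each class. All names, shapes, and the exact weighting scheme here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mixture_predict(logits_list, class_weights):
    """Combine per-prompt predictions with class-wise confidence weights.

    logits_list:   list of (N, C) logit arrays, one per prompt in the mixture.
    class_weights: (M, C) array; class_weights[m, c] is the (learned)
                   confidence of prompt m on class c.
    Hypothetical interface; the paper's CoA-weights may be parameterized
    and normalized differently.
    """
    probs = np.stack([softmax(l) for l in logits_list])  # (M, N, C)
    w = class_weights[:, None, :]                        # (M, 1, C), broadcast over samples
    mixed = (w * probs).sum(axis=0) / w.sum(axis=0)      # class-wise weighted average
    return mixed / mixed.sum(axis=-1, keepdims=True)     # renormalize to a distribution

# Toy usage: a mixture of 2 prompts, 3 samples, 4 classes.
rng = np.random.default_rng(0)
logits = [rng.normal(size=(3, 4)) for _ in range(2)]
weights = softmax(rng.normal(size=(2, 4)), axis=0)  # per-class weights over prompts
p = mixture_predict(logits, weights)
```

The key design point this sketch illustrates is that the weights are per class rather than per prompt: a prompt that is confident only on some classes can dominate there while deferring elsewhere, which is how a mixture can add generalization without eroding specialization.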

@article{hong2025_2506.07484,
  title={CoCoA-Mix: Confusion-and-Confidence-Aware Mixture Model for Context Optimization},
  author={Dasol Hong and Wooju Lee and Hyun Myung},
  journal={arXiv preprint arXiv:2506.07484},
  year={2025}
}