CAPT: Class-Aware Prompt Tuning for Federated Long-Tailed Learning with Vision-Language Model

10 March 2025
Shihao Hou
Xinyi Shang
Shreyank N Gowda
Yang Lu
Chao-Xiang Wu
Yan Yan
Hanzi Wang
Abstract

Effectively handling the co-occurrence of non-IID data and long-tailed distributions remains a critical challenge in federated learning. While fine-tuning vision-language models (VLMs) such as CLIP has shown promise for addressing non-IID data, this approach leads to severe degradation of tail-class performance in federated long-tailed scenarios. Under the combined effects of strongly non-IID data distributions and long-tailed class imbalance, VLM fine-tuning may even fail to yield any improvement. To address this issue, we propose Class-Aware Prompt Tuning for Federated Long-Tailed Learning (CAPT), a novel framework that leverages a pre-trained VLM to handle both data heterogeneity and long-tailed distributions effectively. CAPT introduces a dual-prompt mechanism that synergizes general and class-aware prompts, enabling the framework to capture global trends while preserving class-specific knowledge. To better aggregate and share knowledge across clients, we introduce a heterogeneity-aware client clustering strategy that groups clients according to their data distributions, enabling efficient collaboration and knowledge sharing. Extensive experiments on various long-tailed datasets with different levels of data heterogeneity demonstrate that CAPT significantly improves tail-class performance without compromising overall accuracy, outperforming state-of-the-art methods in federated long-tailed learning scenarios.
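The dual-prompt mechanism and the heterogeneity-aware clustering described above can be illustrated with two short sketches. These are reader-level illustrations only: the abstract does not give CAPT's actual architecture or clustering algorithm, so every class name, shape, and hyperparameter below (DualPromptLearner, prompt_len, cluster_clients, the use of k-means, etc.) is a hypothetical assumption rather than the authors' implementation.

A minimal sketch of combining a shared general prompt with per-class prompts, in the spirit of CLIP-style prompt tuning (PyTorch assumed):

import torch
import torch.nn as nn

class DualPromptLearner(nn.Module):
    """Hypothetical dual-prompt module: a shared general prompt plus per-class prompts."""

    def __init__(self, num_classes: int, prompt_len: int = 4, dim: int = 512):
        super().__init__()
        # General prompt: intended to capture global trends shared by all classes and clients.
        self.general_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        # Class-aware prompts: one learnable context per class, preserving tail-class knowledge.
        self.class_prompts = nn.Parameter(torch.randn(num_classes, prompt_len, dim) * 0.02)

    def forward(self, class_name_embeds: torch.Tensor) -> torch.Tensor:
        # class_name_embeds: (num_classes, name_len, dim) frozen token embeddings of class names.
        num_classes = class_name_embeds.size(0)
        general = self.general_prompt.unsqueeze(0).expand(num_classes, -1, -1)
        # Per-class prompt sequence: [general | class-aware | class name tokens].
        return torch.cat([general, self.class_prompts, class_name_embeds], dim=1)

And a minimal sketch of grouping clients by data distribution, here approximated by k-means over per-client label distributions (the paper's actual grouping criterion may differ):

import numpy as np
from sklearn.cluster import KMeans

def cluster_clients(label_counts: np.ndarray, num_clusters: int = 4, seed: int = 0) -> np.ndarray:
    """Group clients whose label distributions are similar (hypothetical helper).

    label_counts: (num_clients, num_classes) array of raw class counts per client.
    Returns one cluster id per client.
    """
    # Normalize to per-client label distributions so grouping reflects
    # data heterogeneity rather than local dataset size.
    dists = label_counts / np.clip(label_counts.sum(axis=1, keepdims=True), 1, None)
    return KMeans(n_clusters=num_clusters, random_state=seed, n_init=10).fit_predict(dists)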

View on arXiv: https://arxiv.org/abs/2503.06993
@article{hou2025_2503.06993,
  title={CAPT: Class-Aware Prompt Tuning for Federated Long-Tailed Learning with Vision-Language Model},
  author={Shihao Hou and Xinyi Shang and Shreyank N Gowda and Yang Lu and Chao Wu and Yan Yan and Hanzi Wang},
  journal={arXiv preprint arXiv:2503.06993},
  year={2025}
}