Probabilistic Federated Prompt-Tuning with Non-IID and Imbalanced Data

Abstract

Fine-tuning pre-trained models is a popular approach in machine learning for solving complex tasks with moderate amounts of data. However, fine-tuning the entire pre-trained model is ineffective in federated data scenarios where local data distributions are diversely skewed. To address this, we explore integrating federated learning with a more effective prompt-tuning method, which optimizes a small set of input prefixes to reprogram the pre-trained model's behavior. Our approach transforms federated learning into a distributed set-modeling task, aggregating diverse sets of prompts to globally fine-tune the pre-trained model. We benchmark various baselines based on direct adaptations of existing federated model-aggregation techniques and introduce a new probabilistic prompt-aggregation method that substantially outperforms them. Our results on a variety of computer vision datasets confirm that the proposed method is most effective at combating extreme data heterogeneity in federated learning.
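
To make the general technique concrete, the sketch below implements prompt-tuning of a frozen backbone in PyTorch, with a simple FedAvg-style averaging of client prompts standing in for the server-side aggregation step. This is a minimal sketch under stated assumptions, not the paper's method: the probabilistic aggregation the abstract proposes is not detailed there, and every name here (PromptTunedModel, aggregate_prompts, the toy Transformer backbone) is hypothetical.

# Minimal sketch (PyTorch assumed). Plain element-wise averaging of client
# prompts is used as the simplest federated baseline; the paper's
# probabilistic prompt aggregation is NOT reproduced here.
import torch
import torch.nn as nn

class PromptTunedModel(nn.Module):
    """Frozen pre-trained backbone reprogrammed by learnable prompt tokens."""
    def __init__(self, backbone, num_prompts, dim, num_classes):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # keep pre-trained weights fixed
        self.prompts = nn.Parameter(0.02 * torch.randn(num_prompts, dim))
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                    # x: (batch, seq_len, dim) embeddings
        prompts = self.prompts.unsqueeze(0).expand(x.size(0), -1, -1)
        z = self.backbone(torch.cat([prompts, x], dim=1))  # prepend prompts
        return self.head(z.mean(dim=1))      # mean-pool, then classify

def aggregate_prompts(client_prompts):
    """Server step: average the clients' prompt sets (FedAvg-style baseline)."""
    return torch.stack(client_prompts).mean(dim=0)

# Toy usage: a small Transformer encoder plays the role of the frozen model.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=2,
)
model = PromptTunedModel(backbone, num_prompts=8, dim=32, num_classes=10)
logits = model(torch.randn(4, 16, 32))       # only prompts + head are trainable
global_prompts = aggregate_prompts([model.prompts.data, model.prompts.data])

Because the backbone stays frozen, clients only ever exchange the small prompt set (and optionally the head), which is what makes the set-modeling view of aggregation natural.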

@article{weng2025_2502.19752,
  title={Probabilistic Federated Prompt-Tuning with Non-IID and Imbalanced Data},
  author={Pei-Yau Weng and Minh Hoang and Lam M. Nguyen and My T. Thai and Tsui-Wei Weng and Trong Nghia Hoang},
  journal={arXiv preprint arXiv:2502.19752},
  year={2025}
}