FedOne: Query-Efficient Federated Learning for Black-box Discrete Prompt Learning

Black-Box Discrete Prompt Learning (BDPL) is a prompt-tuning method that optimizes discrete prompts without accessing model parameters or gradients, making prompt tuning feasible for cloud-based Large Language Models (LLMs). Adapting federated learning to BDPL can further improve prompt-tuning performance by leveraging data from diverse sources. However, previous research on federated black-box prompt tuning has neglected the substantial query cost associated with cloud-based LLM services. To address this gap, we conduct a theoretical analysis of query efficiency in federated black-box prompt tuning. Our analysis reveals that degrading FedAvg to activate only one client per round, a strategy we call FedOne, achieves optimal query efficiency in federated black-box prompt learning. Building on this insight, we propose the FedOne framework, a federated black-box discrete prompt learning method designed to maximize query efficiency when interacting with cloud-based LLMs. Numerical experiments on various aspects of the framework demonstrate a significant improvement in query efficiency, consistent with our theoretical results.
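To make the one-client-per-round idea concrete, below is a minimal sketch of a FedOne-style training loop. This is not the paper's implementation: `query_llm_loss` is a toy stand-in for the billed cloud-LLM query API, and the gradient estimator is a generic REINFORCE-style (score-function) estimator over prompt-token logits standing in for BDPL's variance-reduced estimator. Every sampled prompt costs one black-box query, so activating a single client per round directly bounds the per-round query budget.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SLOTS, VOCAB = 4, 10            # prompt length and candidate-token pool size

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def query_llm_loss(prompt, preferred):
    # Hypothetical stand-in for the cloud LLM's query API: one call = one
    # billed black-box query, returning a scalar loss for a discrete prompt
    # on the client's local data (here, a toy "preferred token" target).
    return float(np.mean(prompt != preferred))

def client_update(logits, preferred, n_samples):
    # One activated client: estimate the gradient of the expected loss
    # w.r.t. the prompt-token logits using black-box queries only.
    probs = softmax(logits)
    samples, losses = [], []
    for _ in range(n_samples):
        prompt = np.array([rng.choice(VOCAB, p=probs[j]) for j in range(N_SLOTS)])
        samples.append(prompt)
        losses.append(query_llm_loss(prompt, preferred))   # one black-box query
    baseline = np.mean(losses)                             # variance reduction
    grad = np.zeros_like(logits)
    for prompt, loss in zip(samples, losses):
        onehot = np.eye(VOCAB)[prompt]                     # (N_SLOTS, VOCAB)
        grad += (loss - baseline) * (onehot - probs)       # REINFORCE term
    return grad / len(samples)

# FedOne-style server loop: activate exactly ONE client per round.
clients = [rng.integers(VOCAB, size=N_SLOTS) for _ in range(5)]  # toy local data
logits = np.zeros((N_SLOTS, VOCAB))
lr = 2.0
for rnd in range(200):
    k = rng.integers(len(clients))             # the single activated client
    grad = client_update(logits, clients[k], n_samples=8)
    logits -= lr * grad                        # server applies the update
```

The only federated-specific choice here is sampling a single client per round; a FedAvg-style variant would average updates from many activated clients, multiplying the per-round query count by the number of activated clients, which is precisely the cost FedOne avoids.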
View on arXiv: https://arxiv.org/abs/2506.14929

@article{wang2025_2506.14929,
  title   = {FedOne: Query-Efficient Federated Learning for Black-box Discrete Prompt Learning},
  author  = {Ganyu Wang and Jinjie Fang and Maxwell J. Ying and Bin Gu and Xi Chen and Boyu Wang and Charles Ling},
  journal = {arXiv preprint arXiv:2506.14929},
  year    = {2025}
}