Incentivize Contribution and Learn Parameters Too: Federated Learning with Strategic Data Owners

Abstract

Classical federated learning (FL) assumes that clients, each holding a limited amount of noisy data, voluntarily participate and contribute toward learning a global, more accurate model in a principled manner. The learning happens in a distributed fashion without sharing the data with the center. However, these methods do not consider an agent's incentive to participate and contribute, given that data collection and running a distributed algorithm are costly for the clients. The rationality of contribution has recently been raised in the literature, and some results address this problem. This paper addresses the question of simultaneous parameter learning and incentivizing contribution, which distinguishes it from the extant literature. Our first mechanism incentivizes each client to contribute to the FL process at a Nash equilibrium while simultaneously learning the model parameters. However, this equilibrium outcome can be far from the optimum, in which clients contribute their full data and the algorithm learns the optimal parameters. We propose a second mechanism with monetary transfers that is budget-balanced and enables full data contribution along with optimal parameter learning. Large-scale experiments with real (federated) datasets (CIFAR-10, FeMNIST, and Twitter) show that these algorithms converge quickly in practice, yield good welfare guarantees, and deliver better model performance for all agents.
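
To make the strategic setting concrete, the following minimal Python sketch illustrates the kind of contribution game the abstract describes: clients with private per-sample costs best-respond to one another, free-riding pushes the equilibrium below full contribution, and a zero-sum (budget-balanced) transfer can compensate contribution costs. The accuracy curve, utility form, cost model, and transfer rule here are illustrative assumptions, not the paper's actual mechanisms.

import numpy as np

rng = np.random.default_rng(0)

n_clients = 5
data = rng.integers(50, 200, size=n_clients).astype(float)  # d_i: data held by each client (assumed)
cost = rng.uniform(0.001, 0.004, size=n_clients)            # c_i: per-sample contribution cost (assumed)

def accuracy(total):
    # Hypothetical concave accuracy-vs-data curve with diminishing returns.
    return total / (total + 100.0)

def utility(i, x):
    # Client i's payoff: shared model accuracy minus its private data cost.
    return accuracy(x.sum()) - cost[i] * x[i]

def best_response(i, x):
    # Grid search over client i's feasible contributions, others held fixed.
    grid = np.linspace(0.0, data[i], 201)
    x_try = x.copy()
    best_u, best_g = -np.inf, 0.0
    for g in grid:
        x_try[i] = g
        u = utility(i, x_try)
        if u > best_u:
            best_u, best_g = u, g
    return best_g

# Iterated sequential (Gauss-Seidel) best responses; in this aggregative
# game the dynamics settle at a pure Nash equilibrium with free-riding.
x = data.copy()
for _ in range(100):
    x_prev = x.copy()
    for i in range(n_clients):
        x[i] = best_response(i, x)
    if np.allclose(x, x_prev, atol=1e-3):
        break

print("Nash contributions:   ", np.round(x, 1))
print("Full-data contributions:", data)
print("Equilibrium accuracy:", round(accuracy(x.sum()), 3),
      "vs full-data accuracy:", round(accuracy(data.sum()), 3))

# A budget-balanced transfer (payments sum to zero) can close this gap:
# here each client is reimbursed its cost of full contribution, funded
# equally by all clients. This is a toy rule, not the paper's mechanism.
full_cost = cost * data
transfers = full_cost - full_cost.mean()  # zero-sum by construction
print("Transfers sum to:", round(transfers.sum(), 10))

Running this sketch, the equilibrium total contribution is set by the lowest-cost clients while the rest free-ride, which is exactly the gap between the Nash outcome and full-data contribution that the paper's second mechanism is designed to close.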

@article{doshi2025_2505.12010,
  title={Incentivize Contribution and Learn Parameters Too: Federated Learning with Strategic Data Owners},
  author={Drashthi Doshi and Aditya Vema Reddy Kesari and Swaprava Nath and Avishek Ghosh and Suhas S Kowshik},
  journal={arXiv preprint arXiv:2505.12010},
  year={2025}
}