Federated Learning Framework via Distributed Mutual Learning

Abstract

Federated Learning often relies on sharing full or partial model weights, which can burden network bandwidth and raise privacy risks. We present a loss-based alternative using distributed mutual learning. Instead of transmitting weights, clients periodically share their loss predictions on a public test set. Each client then refines its model by combining its local loss with the average Kullback-Leibler divergence over losses from other clients. This collaborative approach both reduces transmission overhead and preserves data privacy. Experiments on a face mask detection task demonstrate that our method outperforms weight-sharing baselines, achieving higher accuracy on unseen data while providing stronger generalization and privacy benefits.
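To make the training objective concrete, the sketch below shows one way the per-client update described above could be implemented. It is an illustrative assumption, not code from the paper: the function name mutual_learning_loss, the kl_weight hyperparameter, the use of PyTorch, and the assumption that what clients exchange are their predictive distributions on the public test set are all ours.

import torch
import torch.nn.functional as F

def mutual_learning_loss(local_logits, peer_probs, labels, kl_weight=1.0):
    # Supervised (local) loss on the client's own predictions.
    ce = F.cross_entropy(local_logits, labels)
    # Average KL(peer || local) over the distributions shared by the other clients
    # on the public test set; F.kl_div expects log-probabilities as its first argument.
    log_p_local = F.log_softmax(local_logits, dim=1)
    kl_terms = [F.kl_div(log_p_local, p_peer, reduction="batchmean")
                for p_peer in peer_probs]
    kl = torch.stack(kl_terms).mean()
    # Combine the local loss with the averaged divergence term.
    return ce + kl_weight * kl

Under this reading, only the peers' outputs on the shared public set cross the network, not model weights, which is the source of the bandwidth and privacy savings claimed in the abstract.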

@article{gupta2025_2503.05803,
  title={Federated Learning Framework via Distributed Mutual Learning},
  author={Yash Gupta},
  journal={arXiv preprint arXiv:2503.05803},
  year={2025}
}