Towards Trustworthy Federated Learning with Untrusted Participants

Resilience against malicious participants and data privacy are essential for trustworthy federated learning, yet achieving both with good utility typically requires the strong assumption of a trusted central server. This paper shows that a significantly weaker assumption suffices: each pair of participants shares a randomness seed unknown to the others. In a setting where malicious participants may collude with an untrusted server, we propose CafCor, an algorithm that combines robust gradient aggregation with correlated noise injection derived from the pairwise shared randomness. We prove that CafCor achieves strong privacy-utility trade-offs, significantly outperforming local differential privacy (DP) methods, which make no trust assumptions, while approaching the utility of central DP, where the server is fully trusted. Empirical results on standard benchmarks validate CafCor's practicality, showing that privacy and robustness can coexist in distributed systems without sacrificing utility or trusting the server.
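
The core mechanism the abstract describes, pairwise shared randomness whose correlated noise cancels across honest participants while still masking each individual gradient from the server, can be illustrated in a few lines. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's CafCor implementation: the function names (pairwise_noise, mask_gradient, trimmed_mean), the Gaussian noise model, the scale sigma, and the use of a coordinate-wise trimmed mean as the robust aggregator are all illustrative choices not taken from the paper.

    import numpy as np

    def pairwise_noise(my_id, peer_id, seed, dim, sigma):
        # Both members of the pair derive the same noise vector from their
        # shared seed; the lower-id participant adds it and the higher-id
        # participant subtracts it, so honest pairs cancel in the sum.
        rng = np.random.default_rng(seed)
        noise = rng.normal(0.0, sigma, size=dim)
        return noise if my_id < peer_id else -noise

    def mask_gradient(my_id, grad, pair_seeds, sigma):
        # Perturb a local gradient with the sum of all pairwise noises.
        # pair_seeds maps each peer id to the seed shared with that peer.
        masked = grad.astype(float)
        for peer_id, seed in pair_seeds.items():
            masked += pairwise_noise(my_id, peer_id, seed, grad.size, sigma)
        return masked

    def trimmed_mean(updates, f):
        # Coordinate-wise trimmed mean, a standard robust aggregator:
        # discard the f largest and f smallest values in each coordinate.
        sorted_updates = np.sort(np.asarray(updates), axis=0)
        return sorted_updates[f : len(updates) - f].mean(axis=0)

    # Five honest participants: the pairwise noise cancels exactly in the sum.
    n, dim, sigma = 5, 3, 1.0
    seeds = {(i, j): 1000 * i + j for i in range(n) for j in range(i + 1, n)}
    grads = [np.full(dim, float(i)) for i in range(n)]
    masked = [mask_gradient(i, grads[i],
                            {j: seeds[min(i, j), max(i, j)] for j in range(n) if j != i},
                            sigma)
              for i in range(n)]
    assert np.allclose(sum(masked), sum(grads))

An untrusted server would apply the robust aggregator (here, trimmed_mean) to the masked updates. Noise from pairs that include a malicious participant does not cancel, and bounding its effect on utility is the kind of analysis the paper's guarantees address; this sketch stops short of that.
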
@article{allouah2025_2505.01874,
  title   = {Towards Trustworthy Federated Learning with Untrusted Participants},
  author  = {Youssef Allouah and Rachid Guerraoui and John Stephan},
  journal = {arXiv preprint arXiv:2505.01874},
  year    = {2025}
}