Privacy-aware Berrut Approximated Coded Computing applied to general distributed learning

Coded computing is one of the techniques that can be used for privacy protection in federated learning. However, most of the constructions used for coded computing work only under the assumption that the computations involved are exact, are generally restricted to special classes of functions, and require quantized inputs. This paper considers the use of Private Berrut Approximate Coded Computing (PBACC) as a general solution for adding strong but non-perfect privacy to federated learning. We derive new adapted PBACC algorithms for centralized aggregation, secure distributed training with centralized data, and secure decentralized training with decentralized data, thus significantly enlarging the applicability of the method and the privacy protection tools available for these paradigms. In particular, PBACC can be used robustly to attain privacy guarantees in decentralized federated learning for a variety of models. Our numerical results show that the achievable quality of different learning models (convolutional neural networks, variational autoencoders, and Cox regression) is minimally altered by these new coded computing schemes, and that the privacy leakage can be bounded strictly to less than a fraction of one bit per participant. Moreover, the computational cost of the encoding and decoding processes depends only on the degree of decentralization of the data.
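To make the Berrut-based encoding and decoding concrete, the sketch below illustrates the core mechanism underlying BACC-style schemes: secret data blocks are padded with random noise blocks (the privacy mask), a Berrut rational interpolant through Chebyshev nodes is evaluated at the workers' points to produce coded shares, each worker applies the target function to its share, and the coordinator decodes by interpolating the workers' outputs back at the data nodes. This is a minimal illustration under stated assumptions, not the paper's algorithm: the helper names (berrut_eval, cheb), the choice of evaluation points, and the parameters K, T, and N are hypothetical, chosen only for demonstration.

```python
import numpy as np

def berrut_eval(nodes, values, x):
    """Berrut's rational interpolant in barycentric form (weights (-1)^j).

    nodes:  (m,) distinct interpolation points
    values: (m, d) data block attached to each point
    x:      scalar evaluation point
    """
    diffs = x - nodes
    hit = np.isclose(diffs, 0.0)
    if hit.any():                      # exactly on a node: return its value
        return values[np.argmax(hit)]
    w = (-1.0) ** np.arange(len(nodes)) / diffs
    return w @ values / w.sum()

def cheb(n):
    """Chebyshev points of the first kind on [-1, 1]."""
    return np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))

K, T, N = 4, 2, 20                     # data blocks, noise blocks, workers (illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(K, 3))            # secret data blocks
R = rng.normal(size=(T, 3))            # random masks providing (non-perfect) privacy
blocks = np.concatenate([X, R])        # interpolate through data and noise jointly

alphas = cheb(K + T)                   # encoding nodes, one per block
betas = 0.99 * cheb(N)                 # worker evaluation points, disjoint from alphas

# Coded shares: each worker i only sees the interpolant evaluated at betas[i]
shares = np.stack([berrut_eval(alphas, blocks, b) for b in betas])

f = lambda v: np.tanh(v)               # target function; need not be polynomial
results = f(shares)                    # computed locally by each worker

# Decode: interpolate the workers' outputs and read them back at the data nodes
approx = np.stack([berrut_eval(betas, results, a) for a in alphas[:K]])
print(np.max(np.abs(approx - f(X))))   # small approximation error, not exact recovery
```

The decode step recovers approximations of f(X_k) rather than exact values, which is what allows the scheme to handle general, non-polynomial functions; in the privacy-aware variants the number of noise blocks and the placement of the evaluation points govern the trade-off between decoding accuracy and the residual leakage, which the paper bounds to below a fraction of one bit per participant.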
@article{martínez-luaña2025_2505.06759,
  title   = {Privacy-aware Berrut Approximated Coded Computing applied to general distributed learning},
  author  = {Xavier Martínez-Luaña and Manuel Fernández-Veiga and Rebeca P. Díaz-Redondo and Ana Fernández-Vilas},
  journal = {arXiv preprint arXiv:2505.06759},
  year    = {2025}
}