Secure and Privacy-Preserving Vertical Federated Learning

Shan Jin
Sai Rahul Rachuri
Yizhen Wang
Anderson C.A. Nascimento
Yiwei Cai
Main: 12 pages · 3 figures · 9 tables · Bibliography: 3 pages · Appendix: 6 pages
Abstract

We propose a novel end-to-end privacy-preserving framework for the vertically split scenario in federated learning (FL), where features are split across clients and labels are not shared by all parties. The framework is instantiated by three efficient protocols for different deployment scenarios and covers both input and output privacy. We achieve this by distributing the role of the FL aggregator across multiple servers that run secure multiparty computation (MPC) protocols to perform model and feature aggregation, and by applying differential privacy (DP) to the final released model. While a naive solution would have the clients delegate the entirety of training to an MPC computation among the servers, our optimized solution, which supports both purely global and mixed global-local model updates in a privacy-preserving manner, drastically reduces the computation and communication performed inside MPC. Experimental results further demonstrate the effectiveness of our protocols.
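The core idea of distributing the aggregator across servers can be illustrated with additive secret sharing, a standard MPC building block: each client splits its update into random shares, one per server, so that no single server learns any individual input, yet the sum of all shares reconstructs the aggregate. The sketch below is illustrative only and is not the paper's actual protocol; the field modulus, share counts, and integer-valued updates are assumptions made for simplicity.

```python
import secrets

PRIME = 2**61 - 1  # illustrative field modulus, not the paper's parameter

def share(value, n_servers):
    """Split a non-negative integer into additive shares mod PRIME.

    Each share alone is uniformly random and reveals nothing
    about the underlying value."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_servers - 1)]
    last = (value - sum(shares)) % PRIME
    return shares + [last]

def aggregate(all_client_shares):
    """Each server sums the one share it received from every client;
    combining the per-server sums reveals only the total, never
    any individual client's update."""
    n_servers = len(all_client_shares[0])
    server_sums = [sum(cs[i] for cs in all_client_shares) % PRIME
                   for i in range(n_servers)]
    return sum(server_sums) % PRIME

# Hypothetical client updates (small integers for clarity)
updates = [10, 20, 30]
client_shares = [share(u, n_servers=3) for u in updates]
print(aggregate(client_shares))  # 60, the sum of all updates
```

In the framework described above, the servers would additionally add calibrated differential-privacy noise before the aggregate model is released, so the output leaks little about any single client's data.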
