Comparison of Privacy-Preserving Distributed Deep Learning Methods in Healthcare
M. Gawali
C. S. Arvind
Shriya Suryavanshi
Harshit Madaan
A. Gaikwad
KN Bhanu Prakash
V. Kulkarni
Aniruddha Pant

Abstract
In this paper, we compare three privacy-preserving distributed learning techniques: federated learning, split learning, and SplitFed. We use these techniques to develop binary classification models for detecting tuberculosis from chest X-rays and compare them in terms of classification performance, communication and computational costs, and training time. We propose a novel distributed learning architecture called SplitFedv3, which performs better than split learning and SplitFedv2 in our experiments. We also propose alternate mini-batch training, a new training technique for split learning, which performs better than alternate client training, in which clients take turns training the model.
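To make the difference between the two client-scheduling schemes concrete, the following is a minimal sketch of split learning in PyTorch, assuming a single cut layer and random tensors as toy stand-ins for chest X-rays; all names (ClientNet, ServerNet, train_step, alternate_client_training, alternate_minibatch_training) and hyperparameters are illustrative assumptions, not the paper's implementation. For brevity, one client-side network is shared by all clients, whereas in practice clients would synchronize the client-side weights between turns.

# Illustrative sketch only: PyTorch is assumed, and every name below is hypothetical,
# not taken from the paper's code.
import torch
import torch.nn as nn

class ClientNet(nn.Module):
    """Client-side front of the split model (up to the cut layer)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
    def forward(self, x):
        return self.features(x)

class ServerNet(nn.Module):
    """Server-side back of the split model (after the cut layer)."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(64, 1)
    def forward(self, a):
        return self.head(a)

def train_step(client, server, opt_c, opt_s, x, y, loss_fn):
    """One split-learning step: client forward pass, activations 'sent' to the server,
    server forward/backward, activation gradients returned, client backward pass."""
    opt_c.zero_grad()
    opt_s.zero_grad()
    smashed = client(x)                               # activations at the cut layer
    sent = smashed.detach().requires_grad_(True)      # stand-in for the network transfer
    loss = loss_fn(server(sent), y)
    loss.backward()                                   # server-side backward
    smashed.backward(sent.grad)                       # client-side backward with returned grads
    opt_s.step()
    opt_c.step()
    return loss.item()

def alternate_client_training(clients_data, client, server, epochs=1, lr=0.01):
    """Alternate client training: each client trains on all of its mini-batches in turn."""
    loss_fn = nn.BCEWithLogitsLoss()
    opt_c = torch.optim.SGD(client.parameters(), lr=lr)
    opt_s = torch.optim.SGD(server.parameters(), lr=lr)
    for _ in range(epochs):
        for batches in clients_data:                  # one client at a time
            for x, y in batches:
                train_step(client, server, opt_c, opt_s, x, y, loss_fn)

def alternate_minibatch_training(clients_data, client, server, epochs=1, lr=0.01):
    """Alternate mini-batch training: clients take turns after every mini-batch."""
    loss_fn = nn.BCEWithLogitsLoss()
    opt_c = torch.optim.SGD(client.parameters(), lr=lr)
    opt_s = torch.optim.SGD(server.parameters(), lr=lr)
    for _ in range(epochs):
        for round_of_batches in zip(*clients_data):   # one mini-batch per client per turn
            for x, y in round_of_batches:
                train_step(client, server, opt_c, opt_s, x, y, loss_fn)

if __name__ == "__main__":
    torch.manual_seed(0)
    # Two hypothetical clients, each with three random binary-labeled batches.
    clients_data = [
        [(torch.randn(8, 1, 28, 28), torch.randint(0, 2, (8, 1)).float()) for _ in range(3)]
        for _ in range(2)
    ]
    alternate_minibatch_training(clients_data, ClientNet(), ServerNet(), epochs=1)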