Towards Generalized and Distributed Privacy-Preserving Representation Learning

International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
Abstract

Privacy-preserving representation learning (PPRL) aims to learn a data encoding that obfuscates sensitive information while retaining target information. We develop the Exclusion-Inclusion Generative Adversarial Network (EIGAN), which generalizes existing adversarial PPRL approaches to account for multiple, potentially overlapping ally and adversary objectives in a dataset. We further extend EIGAN to the case where the data is distributed and cannot be centrally aggregated for training due to privacy constraints. In doing so, we introduce D-EIGAN, the first distributed PPRL method, which decentralizes EIGAN training based on federated learning with fractional parameter sharing. We theoretically analyze the convergence of EIGAN and the behavior of adversaries under the optimal EIGAN and D-EIGAN encoders, considering the impact of dependencies among target and sensitive objectives on encoder performance. Our experiments demonstrate the advantages of EIGAN encodings in terms of accuracy, robustness, and scalability; EIGAN outperforms the previous state-of-the-art in centralized PPRL by a significant margin (47%). The experiments further reveal that D-EIGAN's performance is consistent with that of EIGAN under different node data distributions and is resilient to communication constraints.
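The "fractional parameter sharing" idea mentioned for D-EIGAN can be illustrated with a minimal sketch: each node uploads only a subset of its encoder parameters, and the server averages each entry over the nodes that shared it. The function name, masking scheme, and fallback behavior below are illustrative assumptions, not the paper's exact aggregation rule.

```python
import numpy as np

def fractional_average(params, masks):
    """Average each parameter entry over the nodes that uploaded it.

    params: list of 1-D arrays, one parameter vector per node.
    masks:  list of boolean arrays; True where that node uploads the entry.
    Entries uploaded by no node fall back to the plain average (an
    assumption made here for completeness).
    """
    P = np.stack(params)               # (num_nodes, dim)
    M = np.stack(masks)                # (num_nodes, dim)
    counts = M.sum(axis=0)             # how many nodes shared each entry
    shared_sum = (P * M).sum(axis=0)   # sum over sharing nodes only
    return np.where(counts > 0,
                    shared_sum / np.maximum(counts, 1),
                    P.mean(axis=0))

# Two nodes with 4-entry parameter vectors, each uploading half its entries.
p1 = np.array([1.0, 2.0, 3.0, 4.0])
p2 = np.array([3.0, 4.0, 5.0, 6.0])
m1 = np.array([True, True, False, False])
m2 = np.array([True, False, True, False])
print(fractional_average([p1, p2], [m1, m2]))  # -> [2. 2. 5. 5.]
```

Entry 0 is shared by both nodes and averages to 2; entries 1 and 2 come from a single node; entry 3 is shared by neither and falls back to the plain mean. Sharing only a fraction of parameters reduces per-round communication, matching the paper's claim of resilience to communication constraints.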
