SensitiveNets: Learning Agnostic Representations with Application to Face Images
- CVBM

This work proposes a novel neural network feature representation that suppresses sensitive information in a learned space while preserving the utility of the data. Our work is motivated in part by new international regulations for personal data protection, which require data controllers to guarantee privacy and avoid discrimination hazards when managing sensitive user data. Unlike existing approaches aimed directly at improving fairness, the proposed feature representation enforces the privacy of selected attributes; fairness is therefore not the objective but the result of a privacy-preserving learning method. This approach guarantees that sensitive information cannot be exploited by any agent that processes the output of the model, ensuring both privacy and equality of opportunity. Our method is based on an adversarial regularizer that introduces a sensitive-information removal function into the learning objective. The method is evaluated on face recognition technologies using state-of-the-art algorithms and three publicly available benchmarks. In addition, we present a new annotated dataset with a balanced distribution across genders and ethnic origins, comprising more than 120K images of 24K identities. The experiments demonstrate that it is possible to improve privacy and equality of opportunity while retaining competitive performance on recognition tasks.
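To make the idea of an adversarial regularizer concrete, below is a minimal sketch of one common way to implement it: a gradient-reversal adversary that tries to predict the sensitive attribute from the learned representation, so that the feature extractor is penalized for retaining that information. This is a standard technique and not necessarily the paper's exact removal function; all module names, dimensions, and the trade-off weight `lambda_priv` are illustrative assumptions.

```python
# Sketch of an adversarially regularized objective (assumed formulation,
# not the authors' implementation): the extractor minimizes the task loss
# while a gradient-reversed adversary penalizes sensitive information.
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward
    pass, so minimizing the adversary's loss pushes the extractor to
    remove sensitive information from the representation."""
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

feature_extractor = nn.Sequential(nn.Linear(512, 256), nn.ReLU())  # hypothetical backbone head
task_head = nn.Linear(256, 1000)   # main task, e.g. identity classification
adversary = nn.Linear(256, 2)      # predicts a binary sensitive attribute

lambda_priv = 1.0                  # privacy/utility trade-off weight (assumption)
ce = nn.CrossEntropyLoss()
params = (list(feature_extractor.parameters())
          + list(task_head.parameters())
          + list(adversary.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

def training_step(x, y_task, y_sensitive):
    z = feature_extractor(x)
    loss_task = ce(task_head(z), y_task)
    # The adversary learns to predict the sensitive attribute; the
    # reversed gradient makes the extractor learn a representation
    # from which the attribute cannot be predicted.
    loss_adv = ce(adversary(GradientReversal.apply(z)), y_sensitive)
    loss = loss_task + lambda_priv * loss_adv
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Smoke test with random data (shapes are illustrative).
x = torch.randn(8, 512)
y_task = torch.randint(0, 1000, (8,))
y_sensitive = torch.randint(0, 2, (8,))
print(training_step(x, y_task, y_sensitive))
```

Raising `lambda_priv` trades task accuracy for stronger suppression of the sensitive attribute; the paper's reported results suggest a regime where competitive recognition performance is retained.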