Shielding Latent Face Representations From Privacy Attacks

In today's data-driven analytics landscape, deep learning has become a powerful tool, with latent representations, known as embeddings, playing a central role in many applications. In the face analytics domain, such embeddings are commonly used for biometric recognition (e.g., face identification). However, these embeddings, or templates, can inadvertently expose sensitive attributes such as age, gender, and ethnicity. Leaking such information can compromise personal privacy and infringe on civil liberties and human rights. To address these concerns, we introduce a multi-layer protection framework for embeddings consisting of a sequence of operations: (a) encrypting embeddings using Fully Homomorphic Encryption (FHE), and (b) hashing them using irreversible feature-manifold hashing. Unlike conventional encryption methods, FHE enables computations directly on encrypted data, allowing downstream analytics while maintaining strong privacy guarantees. To reduce the overhead of encrypted processing, we employ embedding compression. The proposed method shields latent representations of sensitive data from leaking private attributes (such as age and gender) while retaining essential functional capabilities (such as face identification). Extensive experiments on two datasets using two face encoders demonstrate that our approach outperforms several state-of-the-art privacy-protection methods.
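To make the encrypted-matching idea concrete, the sketch below illustrates one way the pipeline's core step could look; it is not the paper's implementation. It compresses a face embedding with a hypothetical random projection (standing in for the paper's compression step), encrypts the probe under the CKKS FHE scheme via the TenSEAL library (the abstract does not name a scheme or library, so both are assumptions), and computes a match score as an inner product directly on the ciphertext. Dimensions, the projection matrix, and the acceptance threshold are illustrative.

    # Illustrative sketch only: encrypted face-template matching with CKKS
    # via the TenSEAL library. The random-projection compression, the
    # dimensions, and the threshold are assumptions for illustration; they
    # are not the paper's actual compression or hashing scheme.
    import numpy as np
    import tenseal as ts

    # CKKS context: holds encryption parameters and keys.
    context = ts.context(
        ts.SCHEME_TYPE.CKKS,
        poly_modulus_degree=8192,
        coeff_mod_bit_sizes=[60, 40, 40, 60],
    )
    context.global_scale = 2 ** 40
    context.generate_galois_keys()  # rotations needed for encrypted dot products

    # Compression: project a 512-d embedding down to 64-d to cut FHE cost.
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((512, 64)) / np.sqrt(64)  # hypothetical projection

    def compress(embedding: np.ndarray) -> np.ndarray:
        z = embedding @ proj
        return z / np.linalg.norm(z)  # unit norm, so dot product = cosine similarity

    # Random stand-ins for face-encoder outputs (enrollment and probe).
    enrolled = compress(rng.standard_normal(512))
    probe = compress(rng.standard_normal(512))

    # Encrypt the probe; the matching server never sees it in the clear.
    enc_probe = ts.ckks_vector(context, probe.tolist())

    # Matching: inner product computed directly on the ciphertext.
    enc_score = enc_probe.dot(enrolled.tolist())

    # Only the secret-key holder can decrypt the scalar match score.
    score = enc_score.decrypt()[0]
    print(f"match score: {score:.4f} (accept if above a chosen threshold)")

In this sketch the server computes the score without ever decrypting the probe, which is the property the abstract attributes to FHE; the paper's additional irreversible hashing layer, whose details are not given in the abstract, is omitted here.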
@article{kaushik2025_2505.12688,
  title   = {Shielding Latent Face Representations From Privacy Attacks},
  author  = {Arjun Ramesh Kaushik and Bharat Chandra Yalavarthi and Arun Ross and Vishnu Boddeti and Nalini Ratha},
  journal = {arXiv preprint arXiv:2505.12688},
  year    = {2025}
}