arXiv:2107.14338
Blind Faith: Privacy-Preserving Machine Learning using Function Approximation

29 July 2021
Tanveer Khan
Alexandros Bakas
A. Michalas
Abstract

Over the past few years, the rapid growth of machine learning has been driven by the widespread adoption of cloud-based services. As a result, various solutions have been proposed in which machine learning models run on a remote cloud provider. However, when such a model is deployed on an untrusted cloud, it is of vital importance that users' privacy is preserved. To this end, we propose Blind Faith -- a machine learning model in which training is performed on plaintext data, while classification of users' inputs is performed on homomorphically encrypted ciphertexts. To make our construction compatible with homomorphic encryption, we approximate the activation functions using Chebyshev polynomials. This allows us to build a privacy-preserving machine learning model that can classify encrypted images. Blind Faith preserves users' privacy, since it delivers high-accuracy predictions by computing directly on encrypted data.
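The core idea of approximating an activation function with a Chebyshev polynomial can be sketched as follows. This is a minimal illustration, not the paper's implementation: the choice of the sigmoid activation, the degree 7, and the interval [-5, 5] are assumptions for demonstration purposes only.

```python
import numpy as np
from numpy.polynomial import Chebyshev

def sigmoid(x):
    """Standard sigmoid activation, incompatible with HE as-is
    because exp() is not a polynomial operation."""
    return 1.0 / (1.0 + np.exp(-x))

# Least-squares Chebyshev fit of the sigmoid on an assumed
# interval [-5, 5]; degree 7 is an illustrative choice.
xs = np.linspace(-5.0, 5.0, 1000)
cheb = Chebyshev.fit(xs, sigmoid(xs), deg=7)

# The resulting polynomial uses only additions and multiplications,
# which homomorphic encryption schemes can evaluate on ciphertexts.
max_err = float(np.max(np.abs(cheb(xs) - sigmoid(xs))))
```

In a full pipeline, the network would be trained with the exact activation on plaintext data, and the polynomial surrogate would replace it only at encrypted inference time; the approximation interval must cover the range of the pre-activation values seen in practice.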
