  3. 1912.11328
18
23

Assessing differentially private deep learning with Membership Inference

24 December 2019
Daniel Bernau
Philip-William Grassal
J. Robl
Florian Kerschbaum
    MIACV
    FedML
Abstract

Attacks that aim to identify the training data of public neural networks represent a severe threat to the privacy of individuals participating in the training data set. A possible protection is offered by anonymization of the training data or training function with differential privacy. However, data scientists can choose between local and central differential privacy, and need to select meaningful privacy parameters ϵ, which is challenging for non-privacy experts. We empirically compare local and central differential privacy mechanisms under white- and black-box membership inference to evaluate their relative privacy-accuracy trade-offs. We experiment with several datasets and show that this trade-off is similar for both types of mechanisms. This suggests that local differential privacy is a sound alternative to central differential privacy for differentially private deep learning, since a small ϵ in central differential privacy and a large ϵ in local differential privacy result in similar membership inference attack risk.
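
The abstract's central comparison is between noise added by a trusted curator (central DP) and noise added per record before it leaves its owner (local DP), and why matching their effective protection requires very different ϵ values. The sketch below is not from the paper; it illustrates the two models on a simple mean query with the Laplace mechanism instead of deep learning, and the function names (`central_dp_mean`, `local_dp_mean`) and all parameter choices (`n`, `eps_central`, `eps_local`) are illustrative assumptions.

```python
# Minimal sketch contrasting central and local differential privacy
# with the Laplace mechanism on a mean query (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
data = rng.random(n)            # each record lies in [0, 1]
true_mean = data.mean()

def central_dp_mean(x, eps):
    """Central DP: a trusted curator computes the exact mean and adds
    Laplace noise calibrated to the query's sensitivity, which is 1/n
    for a mean over values in [0, 1]."""
    sensitivity = 1.0 / len(x)
    return x.mean() + rng.laplace(scale=sensitivity / eps)

def local_dp_mean(x, eps):
    """Local DP: every record is perturbed before leaving its owner
    (per-record sensitivity 1), and an untrusted aggregator averages
    the noisy reports."""
    noisy = x + rng.laplace(scale=1.0 / eps, size=len(x))
    return noisy.mean()

eps_central = 0.1                       # "small" epsilon, central model
eps_local = eps_central * np.sqrt(n)    # "large" epsilon, local model

central_err = abs(central_dp_mean(data, eps_central) - true_mean)
local_err = abs(local_dp_mean(data, eps_local) - true_mean)
print(f"central eps={eps_central:.2f}   |error| ~ {central_err:.5f}")
print(f"local   eps={eps_local:.2f}  |error| ~ {local_err:.5f}")
```

Because local noise is added to every record, its standard deviation only shrinks as 1/√n after aggregation, so the local ϵ must be roughly √n times larger than the central ϵ to reach comparable error. This is one intuition for why, as the abstract states, a large ϵ in local DP can correspond to a small ϵ in central DP; the paper evaluates the analogous trade-off empirically for deep learning under membership inference rather than for a mean query.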
