ResearchTrend.AI

arXiv:2201.12328v2 (latest)

Toward Training at ImageNet Scale with Differential Privacy

28 January 2022
Alexey Kurakin
Shuang Song
Steve Chien
Roxana Geambasu
Andreas Terzis
Abhradeep Thakurta
Abstract

Differential privacy (DP) is the de facto standard for training machine learning (ML) models, including neural networks, while ensuring the privacy of individual examples in the training set. Despite a rich literature on how to train ML models with differential privacy, it remains extremely challenging to train real-life, large neural networks with both reasonable accuracy and privacy. We set out to investigate how to do this, using ImageNet image classification as a poster example of an ML task that is very challenging to resolve accurately with DP right now. This paper shares initial lessons from our effort, in the hope that it will inspire and inform other researchers to explore DP training at scale. We show approaches that help make DP training faster, as well as model types and settings of the training process that tend to work better for DP. Combined, the methods we discuss let us train a ResNet-18 with differential privacy to 47.9% accuracy and privacy parameters ε = 10, δ = 10⁻⁶, a significant improvement over "naive" DP-SGD training of ImageNet models but a far cry from the 75% accuracy that can be obtained by the same network without privacy. We share our code at https://github.com/google-research/dp-imagenet, calling for others to join us in moving the needle further on DP at scale.
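The DP-SGD recipe the abstract refers to clips each example's gradient to a fixed L2 norm and adds Gaussian noise calibrated to that clipping bound before the update. A minimal NumPy sketch for logistic regression is below; it is a generic illustration of the standard DP-SGD step, not the paper's JAX implementation, and the names `dp_sgd_step`, `clip_norm`, and `noise_mult` are hypothetical.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD step for logistic regression: clip per-example
    gradients to clip_norm, add Gaussian noise scaled by
    noise_mult * clip_norm, then average and take a gradient step."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Per-example gradients of the logistic loss, shape (n, d).
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    grads = (preds - y)[:, None] * X
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)
    # Sum, add noise calibrated to the clipping bound, then average.
    noisy = grads.sum(axis=0) + rng.normal(0.0, noise_mult * clip_norm,
                                           size=w.shape)
    return w - lr * noisy / len(y)
```

The (ε, δ) guarantee itself is not computed here; in practice it is obtained from a privacy accountant given `noise_mult`, the sampling rate, and the number of steps.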
