DPNAS: Neural Architecture Search for Deep Learning with Differential Privacy

16 October 2021
Anda Cheng
Jiaxing Wang
Xi Sheryl Zhang
Qiang Chen
Peisong Wang
Jian Cheng
arXiv:2110.08557
Abstract

Training deep neural networks (DNNs) with meaningful differential privacy (DP) guarantees severely degrades model utility. In this paper, we demonstrate that the architecture of a DNN has a significant impact on model utility in private deep learning, an effect that is largely unexplored in previous studies. To fill this gap, we propose the first framework that employs neural architecture search to automate model design for private deep learning, dubbed DPNAS. To integrate private learning with architecture search, we carefully design a novel search space and propose a DP-aware method for training candidate models. We empirically verify the effectiveness of the proposed framework. The searched model, DPNASNet, achieves state-of-the-art privacy/utility trade-offs: for a privacy budget of $(\epsilon, \delta) = (3, 1\times10^{-5})$, our model obtains test accuracy of 98.57% on MNIST, 88.09% on FashionMNIST, and 68.33% on CIFAR-10. Furthermore, by studying the generated architectures, we provide several intriguing findings on designing private-learning-friendly DNNs, which can shed new light on model design for deep learning with differential privacy.
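The abstract does not detail the DP-aware training method, but the standard mechanism behind $(\epsilon, \delta)$-DP guarantees of this kind is DP-SGD: clip each per-example gradient, then add Gaussian noise to the aggregate before the parameter update. Below is a minimal PyTorch sketch of one such step, assuming this is roughly how candidate models would be trained; the function name dp_sgd_step and the hyperparameter values (lr, clip_norm, noise_mult) are illustrative assumptions, not values from the paper.

    # Minimal DP-SGD step sketch (illustrative; hyperparameters are not from the paper).
    import torch

    def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip_norm=1.0, noise_mult=1.1):
        """Clip each per-example gradient to clip_norm, sum, add Gaussian noise, update."""
        params = [p for p in model.parameters() if p.requires_grad]
        summed = [torch.zeros_like(p) for p in params]
        for x, y in zip(xs, ys):  # microbatches of size 1 give per-example gradients
            loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
            grads = torch.autograd.grad(loss, params)
            # Scale the whole per-example gradient so its global L2 norm <= clip_norm.
            total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
            for s, g in zip(summed, grads):
                s += g * scale
        with torch.no_grad():
            for p, s in zip(params, summed):
                # Gaussian noise with std = noise_mult * clip_norm, added to the sum,
                # then averaged over the batch (the standard DP-SGD update).
                noise = torch.normal(0.0, noise_mult * clip_norm, size=p.shape)
                p -= lr * (s + noise) / len(xs)

The noise multiplier, clip norm, batch size, and number of steps together determine the achievable $(\epsilon, \delta)$ via a privacy accountant; tightening the budget (e.g., to $\epsilon = 3$ as above) forces more noise per step, which is what makes the choice of architecture matter for utility.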
