Differentially private training of neural networks with Langevin dynamics for calibrated predictive uncertainty

9 July 2021
Moritz Knolle
Alexander Ziller
Dmitrii Usynin
R. Braren
Marcus R. Makowski
Daniel Rueckert
Georgios Kaissis
Abstract

We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models. This represents a serious issue for safety-critical applications, e.g. in medical diagnosis. We highlight and exploit parallels between stochastic gradient Langevin dynamics, a scalable Bayesian inference technique for training deep neural networks, and DP-SGD, in order to train differentially private, Bayesian neural networks with minor adjustments to the original DP-SGD algorithm. Our approach provides considerably more reliable uncertainty estimates than DP-SGD, as demonstrated empirically by a reduction in expected calibration error (MNIST ∼5-fold, Pediatric Pneumonia Dataset ∼2-fold).
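The parallel the abstract draws can be seen by writing the two update rules side by side: DP-SGD clips per-example gradients and adds Gaussian noise to the averaged gradient, while SGLD takes a gradient step and injects Gaussian noise scaled by the learning rate. The sketch below is illustrative only, assuming a NumPy setting with explicit per-example gradients; the function names, parameters, and noise scaling here are generic textbook forms (Abadi et al. for DP-SGD, Welling & Teh for SGLD), not the paper's exact calibration of the two.

```python
import numpy as np

def dp_sgd_step(theta, per_example_grads, lr, clip_norm, noise_mult, rng):
    # DP-SGD: clip each per-example gradient to L2 norm <= clip_norm,
    # average, then add Gaussian noise scaled by clip_norm * noise_mult.
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm)
               for g in per_example_grads]
    g_bar = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=theta.shape)
    return theta - lr * (g_bar + noise)

def sgld_step(theta, grad, lr, rng):
    # SGLD: half-step along the gradient plus Gaussian noise whose
    # variance equals the learning rate -- structurally the same
    # "gradient + Gaussian noise" update that DP-SGD performs.
    noise = rng.normal(0.0, np.sqrt(lr), size=theta.shape)
    return theta - 0.5 * lr * grad + noise
```

Because both updates perturb the gradient with Gaussian noise, the paper's observation is that the DP-SGD noise can be reinterpreted as the SGLD noise, so a privately trained network can double as an approximate Bayesian posterior sampler with only minor algorithmic changes.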
