
The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets

22 February 2018
Nicholas Carlini
Chang Liu
Úlfar Erlingsson
Jernej Kos
Basel Alomair
arXiv: 1802.08232 (abs · PDF · HTML)
Abstract

This paper describes a testing methodology for quantitatively assessing the risk of unintended memorization of rare or unique sequences in generative sequence models, a common type of neural network. Such models are sometimes trained on sensitive data (e.g., the text of users' private messages); our methodology allows deep-learning practitioners to choose configurations that minimize memorization during training, thereby benefiting privacy. In experiments, we show that unintended memorization is a persistent, hard-to-avoid issue that can have serious consequences. Specifically, we show that if memorization is not addressed during training, new, efficient procedures can extract unique, secret sequences such as credit card numbers from trained models. We also show that our testing strategy is practical and easy to apply, e.g., by describing its use for quantitatively preventing data exposure in Smart Compose, a production, commercial neural network trained on millions of users' email messages.
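The paper's core tool is an "exposure" metric: insert a randomly chosen canary sequence (e.g., a fake credit card number) into the training data, then measure how highly the trained model ranks that canary, by log-perplexity, among all candidate sequences of the same format. The snippet below is a minimal sketch of that metric, not the authors' code; the function and variable names are illustrative, it assumes log-perplexities have already been computed by the model, and it enumerates the whole candidate space, whereas the paper estimates the rank from a random sample.

```python
import math

def exposure(canary_lp: float, candidate_lps: list[float]) -> float:
    """Exposure of an inserted canary, per the paper's metric:
        exposure = log2(|R|) - log2(rank)
    where R is the candidate-sequence space (here, `candidate_lps`,
    which should include the canary itself) and rank is the canary's
    position when R is sorted by model log-perplexity, so rank 1 means
    the model prefers the canary over every other candidate.
    """
    rank = 1 + sum(lp < canary_lp for lp in candidate_lps)
    return math.log2(len(candidate_lps)) - math.log2(rank)

# Toy check: out of 1,000,000 candidates, the canary gets the lowest
# log-perplexity (rank 1), so exposure = log2(1e6), roughly 19.9 bits.
others = [10.0 + 0.001 * i for i in range(999_999)]  # hypothetical values
print(exposure(1.0, others + [1.0]))  # ~19.93
```

High exposure indicates the model has memorized the canary rather than merely learned its format; since the canary is a stand-in for rare user secrets, the same measurement bounds how exposed real secrets of that format would be.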
