ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks

22 February 2018
Nicholas Carlini
Chang Liu
Úlfar Erlingsson
Jernej Kos
Dawn Song
arXiv (abs) · PDF · HTML

Papers citing "The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks"

50 / 441 papers shown
Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models
Adel M. Elmahdy
A. Salem
SILM
121
6
0
23 Jun 2023
Protecting User Privacy in Remote Conversational Systems: A Privacy-Preserving framework based on text sanitization
Zhigang Kan
Linbo Qiao
Hao Yu
Liwen Peng
Yifu Gao
Dongsheng Li
101
21
0
14 Jun 2023
Quantifying Overfitting: Evaluating Neural Network Performance through Analysis of Null Space
Hossein Rezaei
Mohammad Sabokrou
80
3
0
30 May 2023
Federated Learning of Gboard Language Models with Differential Privacy
Zheng Xu
Yanxiang Zhang
Galen Andrew
Christopher A. Choquette-Choo
Peter Kairouz
H. B. McMahan
Jesse Rosenstock
Yuanbo Zhang
FedML
122
82
0
29 May 2023
Unleashing the Power of Randomization in Auditing Differentially Private ML
Krishna Pillutla
Galen Andrew
Peter Kairouz
H. B. McMahan
Alina Oprea
Sewoong Oh
93
23
0
29 May 2023
NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models
Kai Mei
Zheng Li
Zhenting Wang
Yang Zhang
Shiqing Ma
AAML SILM
87
51
0
28 May 2023
DPFormer: Learning Differentially Private Transformer on Long-Tailed Data
Youlong Ding
Xueyang Wu
Hongya Wang
Weike Pan
102
1
0
28 May 2023
Privacy Protectability: An Information-theoretical Approach
Siping Shi
Bihai Zhang
Dan Wang
54
1
0
25 May 2023
Trade-Offs Between Fairness and Privacy in Language Modeling
Cleo Matzken
Steffen Eger
Ivan Habernal
SILM
115
6
0
24 May 2023
Differentially Private Synthetic Data via Foundation Model APIs 1: Images
Zinan Lin
Sivakanth Gopi
Janardhan Kulkarni
Harsha Nori
Sergey Yekhanin
175
44
0
24 May 2023
Evaluating Privacy Leakage in Split Learning
Xinchi Qiu
Ilias Leontiadis
Luca Melis
Alex Sablayrolles
Pierre Stock
115
5
0
22 May 2023
Random Relabeling for Efficient Machine Unlearning
Junde Li
Swaroop Ghosh
MU
85
3
0
21 May 2023
Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning
Mustafa Safa Ozdayi
Charith Peris
Jack G. M. FitzGerald
Christophe Dupuy
Jimit Majmudar
Haidar Khan
Rahil Parikh
Rahul Gupta
70
34
0
19 May 2023
Privacy Loss of Noisy Stochastic Gradient Descent Might Converge Even for Non-Convex Losses
S. Asoodeh
Mario Díaz
72
6
0
17 May 2023
Patchwork Learning: A Paradigm Towards Integrative Analysis across Diverse Biomedical Data Sources
Suraj Rajendran
Weishen Pan
M. Sabuncu
Yong Chen
Jiayu Zhou
Fei Wang
102
14
0
10 May 2023
Synthetic Query Generation for Privacy-Preserving Deep Retrieval Systems using Differentially Private Language Models
Aldo G. Carranza
Rezsa Farahani
Natalia Ponomareva
Alexey Kurakin
Matthew Jagielski
Milad Nasr
SyDa
82
7
0
10 May 2023
Does Prompt-Tuning Language Model Ensure Privacy?
Shangyu Xie
Wei Dai
Esha Ghosh
Sambuddha Roy
Dan Schwartz
Kim Laine
SILM
97
4
0
07 Apr 2023
Recognition, recall, and retention of few-shot memories in large language models
A. Orhan
LRM KELM CLL
69
3
0
30 Mar 2023
Secret-Keeping in Question Answering
Nathaniel W. Rollings
Kent O'Sullivan
Sakshum Kulshrestha
KELM
44
0
0
16 Mar 2023
Can Membership Inferencing be Refuted?
Zhifeng Kong
A. Chowdhury
Kamalika Chaudhuri
MIALM MIACV
89
7
0
07 Mar 2023
Data-Copying in Generative Models: A Formal Framework
Robi Bhattacharjee
S. Dasgupta
Kamalika Chaudhuri
TDI
90
8
0
25 Feb 2023
Tight Auditing of Differentially Private Machine Learning
Milad Nasr
Jamie Hayes
Thomas Steinke
Borja Balle
Florian Tramèr
Matthew Jagielski
Nicholas Carlini
Andreas Terzis
FedML
85
53
0
15 Feb 2023
Netflix and Forget: Efficient and Exact Machine Unlearning from Bi-linear Recommendations
Mimee Xu
Jiankai Sun
Xin Yang
K. Yao
Chong-Jun Wang
MU CML CLL
52
13
0
13 Feb 2023
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models
Abdullah Çaglar Öksüz
Anisa Halimi
Erman Ayday
ELM AAML
72
3
0
04 Feb 2023
Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation
Noel Loo
Ramin Hasani
Mathias Lechner
Alexander Amini
Daniela Rus
DD
89
7
0
02 Feb 2023
Analyzing Leakage of Personally Identifiable Information in Language Models
Nils Lukas
A. Salem
Robert Sim
Shruti Tople
Lukas Wutschitz
Santiago Zanella Béguelin
PILM
193
235
0
01 Feb 2023
Distributed sequential federated learning
Z. Wang
X. Y. Zhang
Yansong Chang
FedML
23
0
0
31 Jan 2023
Context-Aware Differential Privacy for Language Modeling
M. H. Dinh
Ferdinando Fioretto
63
2
0
28 Jan 2023
Differentially Private Natural Language Models: Recent Advances and Future Directions
Lijie Hu
Ivan Habernal
Lei Shen
Di Wang
AAML
94
19
0
22 Jan 2023
Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy
Yusuke Kawamoto
Kazumasa Miyake
K. Konishi
Y. Oiwa
64
4
0
18 Jan 2023
TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
H. Aghakhani
Wei Dai
Andre Manoel
Xavier Fernandes
Anant Kharkar
Christopher Kruegel
Giovanni Vigna
David Evans
B. Zorn
Robert Sim
SILM
69
37
0
06 Jan 2023
Model Segmentation for Storage Efficient Private Federated Learning with Top $r$ Sparsification
Sajani Vithana
S. Ulukus
FedML
68
5
0
22 Dec 2022
SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning
A. Salem
Giovanni Cherubin
David Evans
Boris Köpf
Andrew Paverd
Anshuman Suri
Shruti Tople
Santiago Zanella Béguelin
156
40
0
21 Dec 2022
Rate-Privacy-Storage Tradeoff in Federated Learning with Top $r$ Sparsification
Sajani Vithana
S. Ulukus
FedML
58
5
0
19 Dec 2022
Discovering Language Model Behaviors with Model-Written Evaluations
Ethan Perez
Sam Ringer
Kamilė Lukošiūtė
Karina Nguyen
Edwin Chen
...
Danny Hernandez
Deep Ganguli
Evan Hubinger
Nicholas Schiefer
Jared Kaplan
ALM
102
407
0
19 Dec 2022
Swing Distillation: A Privacy-Preserving Knowledge Distillation Framework
Junzhuo Li
Xinwei Wu
Weilong Dong
Shuangzhi Wu
Chao Bian
Deyi Xiong
113
4
0
16 Dec 2022
Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining
Florian Tramèr
Gautam Kamath
Nicholas Carlini
SILM
129
72
0
13 Dec 2022
Near-Optimal Differentially Private Reinforcement Learning
Dan Qiao
Yu Wang
92
14
0
09 Dec 2022
Skellam Mixture Mechanism: a Novel Approach to Federated Learning with Differential Privacy
Ergute Bao
Yizheng Zhu
X. Xiao
Yifan Yang
Beng Chin Ooi
B. Tan
Khin Mi Mi Aung
FedML
82
19
0
08 Dec 2022
Memorization of Named Entities in Fine-tuned BERT Models
Andor Diera
N. Lell
Aygul Garifullina
A. Scherp
68
0
0
07 Dec 2022
LDL: A Defense for Label-Based Membership Inference Attacks
Arezoo Rajabi
D. Sahabandu
Luyao Niu
Bhaskar Ramasubramanian
Radha Poovendran
AAML
49
4
0
03 Dec 2022
Exploring the Limits of Differentially Private Deep Learning with Group-wise Clipping
Jiyan He
Xuechen Li
Da Yu
Huishuai Zhang
Janardhan Kulkarni
Y. Lee
A. Backurs
Nenghai Yu
Jiang Bian
118
49
0
03 Dec 2022
Differentially Private Image Classification from Features
Harsh Mehta
Walid Krichene
Abhradeep Thakurta
Alexey Kurakin
Ashok Cutkosky
113
8
0
24 Nov 2022
Rank-One Editing of Encoder-Decoder Models
Vikas Raunak
Arul Menezes
KELM
83
10
0
23 Nov 2022
Privacy in Practice: Private COVID-19 Detection in X-Ray Images (Extended Version)
Lucas Lange
Maja Schneider
Peter Christen
Erhard Rahm
91
7
0
21 Nov 2022
Large Language Models Struggle to Learn Long-Tail Knowledge
Nikhil Kandpal
H. Deng
Adam Roberts
Eric Wallace
Colin Raffel
RALM KELM
166
419
0
15 Nov 2022
SA-DPSGD: Differentially Private Stochastic Gradient Descent based on Simulated Annealing
Jie Fu
Zhili Chen
Xinpeng Ling
81
1
0
14 Nov 2022
Multi-Epoch Matrix Factorization Mechanisms for Private Machine Learning
Christopher A. Choquette-Choo
H. B. McMahan
Keith Rush
Abhradeep Thakurta
91
46
0
12 Nov 2022
Unintended Memorization and Timing Attacks in Named Entity Recognition Models
Rana Salal Ali
Benjamin Zi Hao Zhao
Hassan Jameel Asghar
Tham Nguyen
Ian D. Wood
Dali Kaafar
AAML
56
3
0
04 Nov 2022
User-Entity Differential Privacy in Learning Natural Language Models
Phung Lai
Nhathai Phan
Tong Sun
R. Jain
Franck Dernoncourt
Jiuxiang Gu
Nikolaos Barmpalios
FedML
74
0
0
01 Nov 2022