The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, Dawn Song · 22 February 2018 · arXiv:1802.08232

Papers citing "The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks"

Showing 50 of 711 citing papers.

Preempting Text Sanitization Utility in Resource-Constrained Privacy-Preserving LLM Interactions
Robin Carpentier, B. Zhao, Hassan Jameel Asghar, Dali Kaafar · 18 Nov 2024

CODECLEANER: Elevating Standards with A Robust Data Contamination Mitigation Toolkit
Jialun Cao, Songqiang Chen, Wuqi Zhang, Hau Ching Lo, Shing-Chi Cheung · 16 Nov 2024

On the Privacy Risk of In-context Learning
Haonan Duan, Adam Dziedzic, Mohammad Yaghini, Nicolas Papernot, Franziska Boenisch · SILM, PILM · 15 Nov 2024

Measuring Non-Adversarial Reproduction of Training Data in Large Language Models
Michael Aerni, Javier Rando, Edoardo Debenedetti, Nicholas Carlini, Daphne Ippolito, F. Tramèr · 15 Nov 2024

TEESlice: Protecting Sensitive Neural Network Models in Trusted Execution Environments When Attackers have Pre-Trained Models
Ding Li, Ziqi Zhang, Mengyu Yao, Y. Cai, Yao Guo, Xiangqun Chen · FedML · 15 Nov 2024

On Active Privacy Auditing in Supervised Fine-tuning for White-Box Language Models
Qian Sun, Hanpeng Wu, Xi Sheryl Zhang · 11 Nov 2024

Slowing Down Forgetting in Continual Learning
Pascal Janetzky, Tobias Schlagenhauf, Stefan Feuerriegel · CLL · 11 Nov 2024

Unlearning in- vs. out-of-distribution data in LLMs under gradient-based method
Teodora Baluta, Pascal Lamblin, Daniel Tarlow, Fabian Pedregosa, Gintare Karolina Dziugaite · MU · 07 Nov 2024

Membership Inference Attacks against Large Vision-Language Models
Zhan Li, Yongtao Wu, Yihang Chen, F. Tonin, Elias Abad Rocamora, V. Cevher · 05 Nov 2024

TDDBench: A Benchmark for Training data detection
Zhihao Zhu, Yi Yang, Defu Lian · 05 Nov 2024

Trustworthy Federated Learning: Privacy, Security, and Beyond
Chunlu Chen, Ji Liu, Haowen Tan, Xingjian Li, Kevin I-Kai Wang, Peng Li, Kouichi Sakurai, Dejing Dou · FedML · 03 Nov 2024

Do LLMs Know to Respect Copyright Notice?
Jialiang Xu, Shenglan Li, Zhaozhuo Xu, Denghui Zhang · 02 Nov 2024

Public Domain 12M: A Highly Aesthetic Image-Text Dataset with Novel Governance Mechanisms
Jordan Meyer, Nick Padgett, Cullen Miller, Laura Exline · 30 Oct 2024

Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina
Yuan Gao, Dokyun Lee, Gordon Burtch, Sina Fazelpour · LRM · 25 Oct 2024

Does Data Contamination Detection Work (Well) for LLMs? A Survey and Evaluation on Detection Assumptions
Yujuan Fu, Özlem Uzuner, Meliha Yetisgen, Fei Xia · 24 Oct 2024

Uncovering Attacks and Defenses in Secure Aggregation for Federated Deep Learning
Yiwei Zhang, R. Behnia, A. Yavuz, Reza Ebrahimi, E. Bertino · FedML · 13 Oct 2024

Federated Learning in Practice: Reflections and Projections
Katharine Daly, Hubert Eichner, Peter Kairouz, H. B. McMahan, Daniel Ramage, Zheng Xu · FedML · 11 Oct 2024

Decoding Secret Memorization in Code LLMs Through Token-Level Characterization
Yuqing Nie, Chong Wang, Kaidi Wang, Guoai Xu, Guosheng Xu, Haoyu Wang · OffRL · 11 Oct 2024

Private Language Models via Truncated Laplacian Mechanism
Tianhao Huang, Tao Yang, Ivan Habernal, Lijie Hu, Di Wang · 10 Oct 2024

Noise is All You Need: Private Second-Order Convergence of Noisy SGD
Dmitrii Avdiukhin, Michael Dinitz, Chenglin Fan, G. Yaroslavtsev · 09 Oct 2024

MIBench: A Comprehensive Framework for Benchmarking Model Inversion Attack and Defense
Yixiang Qiu, Hongyao Yu, Hao Fang, Wenbo Yu, Bin Chen, Shu-Tao Xia, Ke Xu · AAML · 07 Oct 2024

How Much Can We Forget about Data Contamination?
Sebastian Bordt, Suraj Srinivas, Valentyn Boreiko, U. V. Luxburg · 04 Oct 2024

Fine-Tuning Language Models with Differential Privacy through Adaptive Noise Allocation
Xianzhi Li, Ran Zmigrod, Zhiqiang Ma, Xiaomo Liu, Xiaodan Zhu · 03 Oct 2024

Mitigating Memorization In Language Models
Mansi Sakarvadia, Aswathy Ajith, Arham Khan, Nathaniel Hudson, Caleb Geniesse, Kyle Chard, Yaoqing Yang, Ian Foster, Michael W. Mahoney · KELM, MU · 03 Oct 2024

Undesirable Memorization in Large Language Models: A Survey
Ali Satvaty, Suzan Verberne, Fatih Turkmen · ELM, PILM · 03 Oct 2024

Adaptively Private Next-Token Prediction of Large Language Models
James Flemings, Meisam Razaviyayn, Murali Annavaram · 02 Oct 2024

Stars, Stripes, and Silicon: Unravelling the ChatGPT's All-American, Monochrome, Cis-centric Bias
Federico Torrielli · 02 Oct 2024

Deep Unlearn: Benchmarking Machine Unlearning
Xavier F. Cadet, Anastasia Borovykh, Mohammad Malekzadeh, S. Ahmadi-Abhari, Hamed Haddadi · BDL, MU · 02 Oct 2024

Membership Inference Attacks Cannot Prove that a Model Was Trained On Your Data
Jie Zhang, Debeshee Das, Gautam Kamath, Florian Tramèr · MIALM, MIACV · 29 Sep 2024

Localizing Memorization in SSL Vision Encoders
Wenhao Wang, Adam Dziedzic, Michael Backes, Franziska Boenisch · 27 Sep 2024

Predicting and analyzing memorization within fine-tuned Large Language Models
Jérémie Dentan, Davide Buscaldi, A. Shabou, Sonia Vanier · 27 Sep 2024

Differentially Private Non Parametric Copulas: Generating synthetic data with non parametric copulas under privacy guarantees
Pablo A. Osorio-Marulanda, John Esteban Castro Ramirez, Mikel Hernández Jiménez, Nicolas Moreno Reyes, Gorka Epelde Unanue · SyDa · 27 Sep 2024

Trustworthy AI: Securing Sensitive Data in Large Language Models
G. Feretzakis, V. Verykios · 26 Sep 2024

Trustworthy Text-to-Image Diffusion Models: A Timely and Focused Survey
Yi Zhang, Zhen Chen, Chih-Hong Cheng, Wenjie Ruan, Xiaowei Huang, Dezong Zhao, David Flynn, Siddartha Khastgir, Xingyu Zhao · MedIm · 26 Sep 2024

On the Implicit Relation Between Low-Rank Adaptation and Differential Privacy
Saber Malekmohammadi, G. Farnadi · 26 Sep 2024

Data-centric NLP Backdoor Defense from the Lens of Memorization
Zhenting Wang, Zhizhi Wang, Mingyu Jin, Mengnan Du, Juan Zhai, Shiqing Ma · 21 Sep 2024

Training Large ASR Encoders with Differential Privacy
Geeticka Chauhan, Steve Chien, Om Thakkar, Abhradeep Thakurta, Arun Narayanan · 21 Sep 2024

Visualizationary: Automating Design Feedback for Visualization Designers using LLMs
Sungbok Shin, Sanghyun Hong, Niklas Elmqvist · 19 Sep 2024

A Deep Dive into Fairness, Bias, Threats, and Privacy in Recommender Systems: Insights and Future Research
Falguni Roy, Xiaofeng Ding, K.-K. R. Choo, Pan Zhou · FaML · 19 Sep 2024

Extracting Memorized Training Data via Decomposition
Ellen Su, Anu Vellore, Amy Chang, Raffaele Mura, Blaine Nelson, Paul Kassianik, Amin Karbasi · 18 Sep 2024

MEOW: MEMOry Supervised LLM Unlearning Via Inverted Facts
Tianle Gu, Kexin Huang, Ruilin Luo, Yuanqi Yao, Yujiu Yang, Yan Teng, Yingchun Wang · MU · 18 Sep 2024

Recent Advances in Attack and Defense Approaches of Large Language Models
Jing Cui, Yishi Xu, Zhewei Huang, Shuchang Zhou, Jianbin Jiao, Junge Zhang · PILM, AAML · 05 Sep 2024

Differentially Private Kernel Density Estimation
Erzhi Liu, Jerry Yao-Chieh Hu, Alex Reneau, Zhao Song, Han Liu · 03 Sep 2024

Membership Inference Attacks Against In-Context Learning
Rui Wen, Zehan Li, Michael Backes, Yang Zhang · 02 Sep 2024

Accurate Forgetting for All-in-One Image Restoration Model
Xin Su, Zhuoran Zheng · CLL · 01 Sep 2024

Forget to Flourish: Leveraging Machine-Unlearning on Pretrained Language Models for Privacy Leakage
Md. Rafi Ur Rashid, Jing Liu, T. Koike-Akino, Shagufta Mehnaz, Ye Wang · MU, SILM · 30 Aug 2024

Investigating Privacy Leakage in Dimensionality Reduction Methods via Reconstruction Attack
Chayadon Lumbut, Donlapark Ponnoprat · 30 Aug 2024

LLM-PBE: Assessing Data Privacy in Large Language Models
Qinbin Li, Junyuan Hong, Chulin Xie, Jeffrey Tan, Rachel Xin, ..., Dan Hendrycks, Zhangyang Wang, Bo Li, Bingsheng He, Dawn Song · ELM, PILM · 23 Aug 2024

Strong Copyright Protection for Language Models via Adaptive Model Fusion
Javier Abad, Konstantin Donhauser, Francesco Pinto, Fanny Yang · 29 Jul 2024

From Pre-training Corpora to Large Language Models: What Factors Influence LLM Performance in Causal Discovery Tasks?
Tao Feng, Lizhen Qu, Niket Tandon, Zhuang Li, Xiaoxi Kang, Gholamreza Haffari · LRM · 29 Jul 2024