On the Importance of Difficulty Calibration in Membership Inference Attacks
15 November 2021 (arXiv:2111.08440)
Lauren Watson, Chuan Guo, Graham Cormode, Alex Sablayrolles

Papers citing "On the Importance of Difficulty Calibration in Membership Inference Attacks"

30 of 30 papers shown

Measuring Déjà vu Memorization Efficiently
Narine Kokhlikyan, Bargav Jayaraman, Florian Bordes, Chuan Guo, Kamalika Chaudhuri
08 Apr 2025

Is My Text in Your AI Model? Gradient-based Membership Inference Test applied to LLMs
Gonzalo Mancera, Daniel DeAlcala, Julian Fierrez, Ruben Tolosana, Aythami Morales
10 Mar 2025

Towards Label-Only Membership Inference Attack against Pre-trained Large Language Models
Yu He, Boheng Li, L. Liu, Zhongjie Ba, Wei Dong, Yiming Li, Zhanyue Qin, Kui Ren, Chong Chen
26 Feb 2025 (MIALM)

The Canary's Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text
Matthieu Meeus, Lukas Wutschitz, Santiago Zanella Béguelin, Shruti Tople, Reza Shokri
24 Feb 2025

Understanding and Mitigating Membership Inference Risks of Neural Ordinary Differential Equations
Sanghyun Hong, Fan Wu, A. Gruber, Kookjin Lee
12 Jan 2025

Synthetic Data Privacy Metrics
Amy Steier, Lipika Ramaswamy, Andre Manoel, Alexa Haushalter
08 Jan 2025

Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method
Weichao Zhang, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, Xueqi Cheng
23 Sep 2024

Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding
Cheng Wang, Yiwei Wang, Bryan Hooi, Yujun Cai, Nanyun Peng, Kai-Wei Chang
05 Sep 2024

Recent Advances in Attack and Defense Approaches of Large Language Models
Jing Cui, Yishi Xu, Zhewei Huang, Shuchang Zhou, Jianbin Jiao, Junge Zhang
05 Sep 2024 (PILM, AAML)

Noisy Neighbors: Efficient membership inference attacks against LLMs
Filippo Galli, Luca Melis, Tommaso Cucinotta
24 Jun 2024

OSLO: One-Shot Label-Only Membership Inference Attacks
Yuefeng Peng, Jaechul Roh, Subhransu Maji, Amir Houmansadr
27 May 2024

Data Reconstruction: When You See It and When You Don't
Edith Cohen, Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer, Eliad Tsfadia
24 May 2024 (AAML)

Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models
Jingyang Zhang, Jingwei Sun, Eric C. Yeats, Ouyang Yang, Martin Kuo, Jianyi Zhang, Hao Frank Yang, Hai "Helen" Li
03 Apr 2024

Watermarking Makes Language Models Radioactive
Tom Sander, Pierre Fernandez, Alain Durmus, Matthijs Douze, Teddy Furon
22 Feb 2024 (WaLM)

White-box Membership Inference Attacks against Diffusion Models
Yan Pang, Tianhao Wang, Xu Kang, Mengdi Huai, Yang Zhang
11 Aug 2023 (AAML, DiffM)

Membership inference attack with relative decision boundary distance
Jiacheng Xu, Chengxiang Tan
07 Jun 2023

A Note On Interpreting Canary Exposure
Matthew Jagielski
31 May 2023

Unique in the Smart Grid - The Privacy Cost of Fine-Grained Electrical Consumption Data
Antonin Voyez, T. Allard, G. Avoine, P. Cauchois, Elisa Fromont, Matthieu Simonin
14 Nov 2022

Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano
Chuan Guo, Alexandre Sablayrolles, Maziar Sanjabi
24 Oct 2022 (FedML)

Membership Inference Attacks by Exploiting Loss Trajectory
Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang
31 Aug 2022

Measuring Forgetting of Memorized Training Examples
Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, ..., Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang
30 Jun 2022 (TDI)

Membership Inference Attack Using Self Influence Functions
Gilad Cohen, Raja Giryes
26 May 2022 (TDI)

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini
31 Mar 2022 (MIACV)

An Efficient Subpopulation-based Membership Inference Attack
Shahbaz Rezaei, Xin Liu
04 Mar 2022 (MIACV)

Deduplicating Training Data Mitigates Privacy Risks in Language Models
Nikhil Kandpal, Eric Wallace, Colin Raffel
14 Feb 2022 (PILM, MU)

Membership Inference Attacks From First Principles
Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr
07 Dec 2021 (MIACV, MIALM)

Opacus: User-Friendly Differential Privacy Library in PyTorch
Ashkan Yousefpour, I. Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, ..., Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, Ilya Mironov
25 Sep 2021 (VLM)

Federated Learning with Buffered Asynchronous Aggregation
John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael G. Rabbat, Mani Malek, Dzmitry Huba
11 Jun 2021 (FedML)

11 Jun 2021
Extracting Training Data from Large Language Models
Extracting Training Data from Large Language Models
Nicholas Carlini
Florian Tramèr
Eric Wallace
Matthew Jagielski
Ariel Herbert-Voss
...
Tom B. Brown
D. Song
Ulfar Erlingsson
Alina Oprea
Colin Raffel
MLAU
SILM
290
1,824
0
14 Dec 2020
Systematic Evaluation of Privacy Risks of Machine Learning Models
Liwei Song, Prateek Mittal
24 Mar 2020 (MIACV)