Membership Inference Attacks From First Principles (arXiv 2112.03570)
7 December 2021
Nicholas Carlini
Steve Chien
Milad Nasr
Shuang Song
Andreas Terzis
Florian Tramèr
MIACV
MIALM

Papers citing "Membership Inference Attacks From First Principles"

Showing 50 of 79 papers.
Unveiling Impact of Frequency Components on Membership Inference Attacks for Diffusion Models
Puwei Lian
Yujun Cai
Songze Li
72
0
0
27 May 2025
Covert Attacks on Machine Learning Training in Passively Secure MPC
Matthew Jagielski
Daniel Escudero
Rahul Rachuri
Peter Scholl
75
0
0
21 May 2025
SEPS: A Separability Measure for Robust Unlearning in LLMs
Wonje Jeung
Sangyeon Yoon
Albert No
MU
VLM
209
0
0
20 May 2025
Adversarial Attacks in Multimodal Systems: A Practitioner's Survey
Shashank Kapoor
Sanjay Surendranath Girija
Lakshit Arora
Dipen Pradhan
Ankit Shetgaonkar
Aman Raj
AAML
124
0
0
06 May 2025
Learning on LLM Output Signatures for gray-box Behavior Analysis
Guy Bar-Shalom
Fabrizio Frasca
Derek Lim
Yoav Gelberg
Yftah Ziser
Ran El-Yaniv
Gal Chechik
Haggai Maron
110
0
0
18 Mar 2025
DP-GPL: Differentially Private Graph Prompt Learning
Jing Xu
Franziska Boenisch
Iyiola Emmanuel Olatunji
Adam Dziedzic
AAML
102
0
0
13 Mar 2025
AMUN: Adversarial Machine UNlearning
A. Boroojeny
Hari Sundaram
Varun Chandrasekaran
MU
AAML
74
0
0
02 Mar 2025
Atlas: A Framework for ML Lifecycle Provenance & Transparency
Marcin Spoczynski
Marcela S. Melara
Siyang Song
189
1
0
26 Feb 2025
Are Neuromorphic Architectures Inherently Privacy-preserving? An Exploratory Study
Ayana Moshruba
Ihsen Alouani
Maryam Parsa
AAML
111
3
0
24 Feb 2025
CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling
Kaiyuan Zhang
Siyuan Cheng
Guangyu Shen
Bruno Ribeiro
Shengwei An
Pin-Yu Chen
Xinming Zhang
Ninghui Li
282
2
0
28 Jan 2025
Unlearning Clients, Features and Samples in Vertical Federated Learning
Ayush K. Varshney
Konstantinos Vandikas
V. Torra
MU
70
1
0
23 Jan 2025
Synthetic Data Can Mislead Evaluations: Membership Inference as Machine Text Detection
Ali Naseh
Niloofar Mireshghallah
89
0
0
20 Jan 2025
Rethinking Membership Inference Attacks Against Transfer Learning
Yanwei Yue
Jing Chen
Qianru Fang
Kun He
Ziming Zhao
Hao Ren
Guowen Xu
Yang Liu
Yang Xiang
110
34
0
20 Jan 2025
CaFA: Cost-aware, Feasible Attacks With Database Constraints Against Neural Tabular Classifiers
Matan Ben-Tov
Daniel Deutch
Nave Frost
Mahmood Sharif
AAML
173
0
0
20 Jan 2025
Safeguarding System Prompts for LLMs
Zhifeng Jiang
Zhihua Jin
Guoliang He
AAML
SILM
137
2
0
10 Jan 2025
Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios
Sangyeon Yoon
Wonje Jeung
Albert No
154
0
0
02 Dec 2024
CLEAR: Character Unlearning in Textual and Visual Modalities
Alexey Dontsov
Dmitrii Korzh
Alexey Zhavoronkin
Boris Mikheev
Denis Bobkov
Aibek Alanov
Oleg Y. Rogov
Ivan Oseledets
Elena Tutubalina
MU
AILaw
VLM
106
5
0
23 Oct 2024
Decoding Secret Memorization in Code LLMs Through Token-Level Characterization
Yuqing Nie
Chong Wang
Kaidi Wang
Guoai Xu
Guosheng Xu
Haoyu Wang
OffRL
373
3
0
11 Oct 2024
Detecting Training Data of Large Language Models via Expectation Maximization
Gyuwan Kim
Yang Li
Evangelia Spiliopoulou
Jie Ma
Miguel Ballesteros
William Yang Wang
MIALM
206
4
2
10 Oct 2024
Privacy Vulnerabilities in Marginals-based Synthetic Data
Steven Golob
Sikha Pentyala
Anuar Maratkhan
Martine De Cock
54
4
0
07 Oct 2024
Ward: Provable RAG Dataset Inference via LLM Watermarks
Nikola Jovanović
Robin Staab
Maximilian Baader
Martin Vechev
379
4
0
04 Oct 2024
Differentially Private Active Learning: Balancing Effective Data Selection and Privacy
Kristian Schwethelm
Johannes Kaiser
Jonas Kuntzer
Mehmet Yigitsoy
Daniel Rueckert
Georgios Kaissis
92
0
0
01 Oct 2024
On the Implicit Relation Between Low-Rank Adaptation and Differential Privacy
Saber Malekmohammadi
G. Farnadi
200
2
0
26 Sep 2024
Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method
Weichao Zhang
Ruqing Zhang
Jiafeng Guo
Maarten de Rijke
Yixing Fan
Xueqi Cheng
67
16
0
23 Sep 2024
Generated Data with Fake Privacy: Hidden Dangers of Fine-tuning Large Language Models on Generated Data
Atilla Akkus
Mingjie Li
Junjie Chu
Michael Backes
Sinem Sav
SILM
SyDa
92
4
0
12 Sep 2024
Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding
Cheng Wang
Yiwei Wang
Bryan Hooi
Yujun Cai
Nanyun Peng
Kai-Wei Chang
88
6
0
05 Sep 2024
Differentially Private Kernel Density Estimation
Erzhi Liu
Jerry Yao-Chieh Hu
Alex Reneau
Zhao Song
Han Liu
96
3
0
03 Sep 2024
Range Membership Inference Attacks
Jiashu Tao
Reza Shokri
93
2
0
09 Aug 2024
Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services
Shaopeng Fu
Xuexue Sun
Ke Qing
Tianhang Zheng
Di Wang
AAML
MIACV
SILM
104
0
0
05 Aug 2024
A Benchmark for Multi-speaker Anonymization
Xiaoxiao Miao
Ruijie Tao
Chang Zeng
Xin Wang
66
1
0
08 Jul 2024
Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk
Jimmy Z. Di
Yiwei Lu
Gautam Kamath
Ayush Sekhari
Seth Neel
AAML
MU
116
17
0
25 Jun 2024
ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods
Roy Xie
Junlin Wang
Ruomin Huang
Minxing Zhang
Rong Ge
Jian Pei
Neil Zhenqiang Gong
Bhuwan Dhingra
MIALM
98
17
0
23 Jun 2024
Blind Baselines Beat Membership Inference Attacks for Foundation Models
Debeshee Das
Jie Zhang
Florian Tramèr
MIALM
137
35
1
23 Jun 2024
Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model
Tudor Cebere
A. Bellet
Nicolas Papernot
77
10
0
23 May 2024
Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors
Chun-Yin Huang
Kartik Srinivas
Xin Zhang
Xiaoxiao Li
DD
110
6
0
19 May 2024
Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models
Jingyang Zhang
Jingwei Sun
Eric C. Yeats
Ouyang Yang
Martin Kuo
Jianyi Zhang
Hao Frank Yang
Hai "Helen" Li
95
50
0
03 Apr 2024
State-of-the-Art Approaches to Enhancing Privacy Preservation of Machine Learning Datasets: A Survey
Chaoyu Zhang
Shaoyu Li
AILaw
91
4
0
25 Feb 2024
Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Guangjing Wang
Ce Zhou
Yuanda Wang
Bocheng Chen
Hanqing Guo
Qiben Yan
AAML
SILM
106
3
0
20 Nov 2023
Fundamental Limits of Membership Inference Attacks on Machine Learning Models
Eric Aubinais
Elisabeth Gassiat
Pablo Piantanida
MIACV
82
2
0
20 Oct 2023
Differentially Private Synthetic Data via Foundation Model APIs 1: Images
Zinan Lin
Sivakanth Gopi
Janardhan Kulkarni
Harsha Nori
Sergey Yekhanin
108
43
0
24 May 2023
Enhanced Membership Inference Attacks against Machine Learning Models
Jiayuan Ye
Aadyaa Maddi
S. K. Murakonda
Vincent Bindschaedler
Reza Shokri
MIALM
MIACV
76
252
0
18 Nov 2021
On the Importance of Difficulty Calibration in Membership Inference Attacks
Lauren Watson
Chuan Guo
Graham Cormode
Alex Sablayrolles
82
131
0
15 Nov 2021
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Yugeng Liu
Rui Wen
Xinlei He
A. Salem
Zhikun Zhang
Michael Backes
Emiliano De Cristofaro
Mario Fritz
Yang Zhang
AAML
60
132
0
04 Feb 2021
Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning
Milad Nasr
Shuang Song
Abhradeep Thakurta
Nicolas Papernot
Nicholas Carlini
MIACV
FedML
118
223
0
11 Jan 2021
Extracting Training Data from Large Language Models
Nicholas Carlini
Florian Tramèr
Eric Wallace
Matthew Jagielski
Ariel Herbert-Voss
...
Tom B. Brown
D. Song
Ulfar Erlingsson
Alina Oprea
Colin Raffel
MLAU
SILM
486
1,917
0
14 Dec 2020
When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
Gavin Brown
Mark Bun
Vitaly Feldman
Adam D. Smith
Kunal Talwar
292
99
0
11 Dec 2020
Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries
Shadi Rahimian
Tribhuvanesh Orekondy
Mario Fritz
MIACV
38
25
0
01 Sep 2020
What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
Vitaly Feldman
Chiyuan Zhang
TDI
140
464
0
09 Aug 2020
Membership Leakage in Label-Only Exposures
Zheng Li
Yang Zhang
70
245
0
30 Jul 2020
Label-Only Membership Inference Attacks
Christopher A. Choquette-Choo
Florian Tramèr
Nicholas Carlini
Nicolas Papernot
MIACV
MIALM
90
511
0
28 Jul 2020