ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Privacy Auditing of Large Language Models
Ashwinee Panda, Xinyu Tang, Milad Nasr, Christopher A. Choquette-Choo, Prateek Mittal
9 March 2025 · arXiv:2503.06808

Papers citing "Privacy Auditing of Large Language Models" (38 papers shown)
  • Strong Membership Inference Attacks on Massive Datasets and (Moderately) Large Language Models
    Jamie Hayes, Ilia Shumailov, Christopher A. Choquette-Choo, Matthew Jagielski, G. Kaissis, ..., Matthieu Meeus, Yves-Alexandre de Montjoye, Franziska Boenisch, Adam Dziedzic, A. Feder Cooper (24 May 2025)
  • Can Differentially Private Fine-tuning LLMs Protect Against Privacy Attacks?
    Hao Du, Shang Liu, Yang Cao (28 Apr 2025)
  • Toward a Human-Centered Evaluation Framework for Trustworthy LLM-Powered GUI Agents
    Chong Chen, Zhiping Zhang, Ibrahim Khalilov, Bingcan Guo, Simret Araya Gebreegziabher, Yanfang Ye, Ziang Xiao, Yaxing Yao, Tianshi Li, T. Li (24 Apr 2025)
  • Empirical Privacy Variance
    Yuzheng Hu, Fan Wu, Ruicheng Xian, Yuhang Liu, Lydia Zakynthinou, Pritish Kamath, Chiyuan Zhang, David A. Forsyth (16 Mar 2025)
  • Nob-MIAs: Non-biased Membership Inference Attacks Assessment on Large Language Models with Ex-Post Dataset Construction
    Cédric Eichler, Nathan Champeil, Nicolas Anciaux, Alexandra Bensamoun, Héber H. Arcolezi, José Maria De Fuentes (12 Aug 2024)
  • Seeing Is Believing: Black-Box Membership Inference Attacks Against Retrieval Augmented Generation
    Yongqian Li, Gaoyang Liu, Yang Yang, Chen Wang (27 Jun 2024)
  • Blind Baselines Beat Membership Inference Attacks for Foundation Models
    Debeshee Das, Jie Zhang, Florian Tramèr (23 Jun 2024)
  • Is My Data in Your Retrieval Database? Membership Inference Attacks Against Retrieval Augmented Generation
    Maya Anderson, Guy Amit, Abigail Goldsteen (30 May 2024)
  • Teach LLMs to Phish: Stealing Private Information from Language Models
    Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal (01 Mar 2024)
  • PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining
    Mishaal Kazmi, H. Lautraite, Alireza Akbari, Mauricio Soroco, Qiaoyue Tang, Tao Wang, Sébastien Gambs, Mathias Lécuyer (12 Feb 2024)
  • Private Fine-tuning of Large Language Models with Zeroth-order Optimization
    Xinyu Tang, Ashwinee Panda, Milad Nasr, Saeed Mahloujifar, Prateek Mittal (09 Jan 2024)
  • Detecting Pretraining Data from Large Language Models
    Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer (25 Oct 2023)
  • Membership Inference Attacks against Language Models via Neighbourhood Comparison
    Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick (29 May 2023)
  • Unleashing the Power of Randomization in Auditing Differentially Private ML
    Krishna Pillutla, Galen Andrew, Peter Kairouz, H. B. McMahan, Alina Oprea, Sewoong Oh (29 May 2023)
  • Tight Auditing of Differentially Private Machine Learning
    Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis (15 Feb 2023)
  • Analyzing Leakage of Personally Identifiable Information in Language Models
    Nils Lukas, A. Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, Santiago Zanella Béguelin (01 Feb 2023)
  • A New Linear Scaling Rule for Private Adaptive Hyperparameter Optimization
    Ashwinee Panda, Xinyu Tang, Saeed Mahloujifar, Vikash Sehwag, Prateek Mittal (08 Dec 2022)
  • Exploring the Limits of Differentially Private Deep Learning with Group-wise Clipping
    Jiyan He, Xuechen Li, Da Yu, Huishuai Zhang, Janardhan Kulkarni, Y. Lee, A. Backurs, Nenghai Yu, Jiang Bian (03 Dec 2022)
  • Measuring Forgetting of Memorized Training Examples
    Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, ..., Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang (30 Jun 2022)
  • The Privacy Onion Effect: Memorization is Relative
    Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr (21 Jun 2022)
  • Are Large Pre-Trained Language Models Leaking Your Personal Information?
    Jie Huang, Hanyin Shao, Kevin Chen-Chuan Chang (25 May 2022)
  • Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models
    Kushal Tirumala, Aram H. Markosyan, Luke Zettlemoyer, Armen Aghajanyan (22 May 2022)
  • PaLM: Scaling Language Modeling with Pathways
    Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, ..., Kathy Meier-Hellstern, Douglas Eck, J. Dean, Slav Petrov, Noah Fiedel (05 Apr 2022)
  • Debugging Differential Privacy: A Case Study for Privacy Auditing
    Florian Tramèr, Andreas Terzis, Thomas Steinke, Shuang Song, Matthew Jagielski, Nicholas Carlini (24 Feb 2022)
  • Quantifying Memorization Across Neural Language Models
    Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang (15 Feb 2022)
  • Membership Inference Attacks From First Principles
    Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr (07 Dec 2021)
  • Differentially Private Fine-tuning of Language Models
    Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang (13 Oct 2021)
  • Extracting Training Data from Large Language Models
    Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, Basel Alomair, Ulfar Erlingsson, Alina Oprea, Colin Raffel (14 Dec 2020)
  • Label-Only Membership Inference Attacks
    Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot (28 Jul 2020)
  • Auditing Differentially Private Machine Learning: How Private is Private SGD?
    Matthew Jagielski, Jonathan R. Ullman, Alina Oprea (13 Jun 2020)
  • Language Models are Few-Shot Learners
    Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei (28 May 2020)
  • Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
    Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu (23 Oct 2019)
  • The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
    Nicholas Carlini, Chang-rui Liu, Ulfar Erlingsson, Jernej Kos, Basel Alomair (22 Feb 2018)
  • Personalizing Dialogue Agents: I have a dog, do you have pets too?
    Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, Jason Weston (22 Jan 2018)
  • The E2E Dataset: New Challenges For End-to-End Generation
    Jekaterina Novikova, Ondrej Dusek, Verena Rieser (28 Jun 2017)
  • Membership Inference Attacks against Machine Learning Models
    Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov (18 Oct 2016)
  • Deep Learning with Differential Privacy
    Martín Abadi, Andy Chu, Ian Goodfellow, H. B. McMahan, Ilya Mironov, Kunal Talwar, Li Zhang (01 Jul 2016)
  • The Composition Theorem for Differential Privacy
    Peter Kairouz, Sewoong Oh, Pramod Viswanath (04 Nov 2013)