Auditing Differentially Private Machine Learning: How Private is Private SGD?

Matthew Jagielski, Jonathan R. Ullman, Alina Oprea
FedML · 13 June 2020 · arXiv:2006.07709

Papers citing "Auditing Differentially Private Machine Learning: How Private is Private SGD?"

Showing 50 of 71 citing papers.
Lower Bounds on the MMSE of Adversarially Inferring Sensitive Features
Monica Welfert, Nathan Stromberg, Mario Díaz, Lalitha Sankar
AAML · 13 May 2025

DPImageBench: A Unified Benchmark for Differentially Private Image Synthesis
Chen Gong, Kecen Li, Zinan Lin, Tianhao Wang
18 Mar 2025

The Canary's Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text
Matthieu Meeus, Lukas Wutschitz, Santiago Zanella Béguelin, Shruti Tople, Reza Shokri
24 Feb 2025

Synthetic Data Privacy Metrics
Amy Steier, Lipika Ramaswamy, Andre Manoel, Alexa Haushalter
08 Jan 2025

Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios
Sangyeon Yoon, Wonje Jeung, Albert No
02 Dec 2024

The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD
Thomas Steinke, Milad Nasr, Arun Ganesh, Borja Balle, Christopher A. Choquette-Choo, Matthew Jagielski, Jamie Hayes, Abhradeep Thakurta, Adam Smith, Andreas Terzis
08 Oct 2024

Membership Inference Attacks Cannot Prove that a Model Was Trained On Your Data
Jie Zhang, Debeshee Das, Gautam Kamath, Florian Tramèr
MIALM, MIACV · 29 Sep 2024

QueryCheetah: Fast Automated Discovery of Attribute Inference Attacks Against Query-Based Systems
Bozhidar Stevanoski, Ana-Maria Cretu, Yves-Alexandre de Montjoye
AAML · 03 Sep 2024

PaCoST: Paired Confidence Significance Testing for Benchmark Contamination Detection in Large Language Models
Huixuan Zhang, Yun Lin, Xiaojun Wan
26 Jun 2024

Laminator: Verifiable ML Property Cards using Hardware-assisted Attestations
Vasisht Duddu, Oskari Jarvinen, Lachlan J. Gunn, Nirmal Asokan
25 Jun 2024

VFLGAN: Vertical Federated Learning-based Generative Adversarial Network for Vertically Partitioned Data Publication
Xun Yuan, Yang Yang, P. Gope, A. Pasikhani, Biplab Sikdar
15 Apr 2024

A Survey of Graph Neural Networks in Real world: Imbalance, Noise, Privacy and OOD Challenges
Wei Ju, Siyu Yi, Yifan Wang, Zhiping Xiao, Zhengyan Mao, ..., Senzhang Wang, Xinwang Liu, Xiao Luo, Philip S. Yu, Ming Zhang
AI4CE · 07 Mar 2024

TOFU: A Task of Fictitious Unlearning for LLMs
Pratyush Maini, Zhili Feng, Avi Schwarzschild, Zachary Chase Lipton, J. Zico Kolter
MU, CLL · 11 Jan 2024

Revealing the True Cost of Locally Differentially Private Protocols: An Auditing Perspective
Héber H. Arcolezi, Sébastien Gambs
04 Sep 2023

A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications
Yi Zhang, Yuying Zhao, Zhaoqing Li, Xueqi Cheng, Yu-Chiang Frank Wang, Olivera Kotevska, Philip S. Yu, Tyler Derr
31 Aug 2023

Epsilon*: Privacy Metric for Machine Learning Models
Diana M. Negoescu, H. González, Saad Eddin Al Orjany, Jilei Yang, Yuliia Lut, ..., Xinyi Zheng, Zachariah Douglas, Vidita Nolkha, P. Ahammad, G. Samorodnitsky
21 Jul 2023

Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD
Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot
01 Jul 2023

A Note On Interpreting Canary Exposure
Matthew Jagielski
31 May 2023

Training Data Extraction From Pre-trained Language Models: A Survey
Shotaro Ishihara
25 May 2023

Auditing and Generating Synthetic Data with Controllable Trust Trade-offs
Brian M. Belgodere, Pierre Dognin, Adam Ivankay, Igor Melnyk, Youssef Mroueh, ..., Mattia Rigotti, Jerret Ross, Yair Schiff, Radhika Vedpathak, Richard A. Young
21 Apr 2023

How to DP-fy ML: A Practical Guide to Machine Learning with Differential Privacy
Natalia Ponomareva, Hussein Hazimeh, Alexey Kurakin, Zheng Xu, Carson E. Denison, H. B. McMahan, Sergei Vassilvitskii, Steve Chien, Abhradeep Thakurta
01 Mar 2023

Tight Auditing of Differentially Private Machine Learning
Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis
FedML · 15 Feb 2023

Privacy Risk for anisotropic Langevin dynamics using relative entropy bounds
Anastasia Borovykh, N. Kantas, P. Parpas, G. Pavliotis
01 Feb 2023

Extracting Training Data from Diffusion Models
Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace
DiffM · 30 Jan 2023

SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning
A. Salem, Giovanni Cherubin, David Evans, Boris Köpf, Andrew Paverd, Anshuman Suri, Shruti Tople, Santiago Zanella Béguelin
21 Dec 2022

Privacy in Practice: Private COVID-19 Detection in X-Ray Images (Extended Version)
Lucas Lange, Maja Schneider, Peter Christen, Erhard Rahm
21 Nov 2022

Provable Membership Inference Privacy
Zachary Izzo, Jinsung Yoon, Sercan Ö. Arik, James Zou
12 Nov 2022

TAPAS: a Toolbox for Adversarial Privacy Auditing of Synthetic Data
F. Houssiau, James Jordon, Samuel N. Cohen, Owen Daniel, Andrew Elliott, James Geddes, C. Mole, Camila Rangel Smith, Lukasz Szpruch
12 Nov 2022

QuerySnout: Automating the Discovery of Attribute Inference Attacks against Query-Based Systems
Ana-Maria Cretu, F. Houssiau, Antoine Cully, Yves-Alexandre de Montjoye
AAML · 09 Nov 2022

Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano
Chuan Guo, Alexandre Sablayrolles, Maziar Sanjabi
FedML · 24 Oct 2022

Generalised Likelihood Ratio Testing Adversaries through the Differential Privacy Lens
Georgios Kaissis, Alexander Ziller, Stefan Kolek Martinez de Azagra, Daniel Rueckert
24 Oct 2022

A General Framework for Auditing Differentially Private Machine Learning
Fred Lu, Joseph Munoz, Maya Fuchs, Tyler LeBlond, Elliott Zaresky-Williams, Edward Raff, Francis Ferraro, Brian Testa
FedML · 16 Oct 2022

Differentially Private Deep Learning with ModelMix
Hanshen Xiao, Jun Wan, S. Devadas
07 Oct 2022

CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning
Samuel Maddock, Alexandre Sablayrolles, Pierre Stock
FedML · 06 Oct 2022

Algorithms that Approximate Data Removal: New Results and Limitations
Vinith Suriyakumar, Ashia Wilson
MU · 25 Sep 2022

Algorithms with More Granular Differential Privacy Guarantees
Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Thomas Steinke
08 Sep 2022

Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi Koyejo, Bo Li
FedML · 08 Sep 2022

SNAP: Efficient Extraction of Private Properties with Poisoning
Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan R. Ullman
MIACV · 25 Aug 2022

Private, Efficient, and Accurate: Protecting Models Trained by Multi-party Learning with Differential Privacy
Wenqiang Ruan, Ming Xu, Wenjing Fang, Li Wang, Lei Wang, Wei Han
18 Aug 2022

AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model
Saleh Soltan, Shankar Ananthakrishnan, Jack G. M. FitzGerald, Rahul Gupta, Wael Hamza, ..., Mukund Sridhar, Fabian Triefenbach, Apurv Verma, Gokhan Tur, Premkumar Natarajan
02 Aug 2022

Measuring Forgetting of Memorized Training Examples
Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, ..., Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang
TDI · 30 Jun 2022

The Privacy Onion Effect: Memorization is Relative
Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr
PILM, MIACV · 21 Jun 2022

Disparate Impact in Differential Privacy from Gradient Misalignment
Maria S. Esipova, Atiyeh Ashari Ghomi, Yaqiao Luo, Jesse C. Cresswell
15 Jun 2022

Neurotoxin: Durable Backdoors in Federated Learning
Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael W. Mahoney, Joseph E. Gonzalez, Kannan Ramchandran, Prateek Mittal
FedML · 12 Jun 2022

Bayesian Estimation of Differential Privacy
Santiago Zanella Béguelin, Lukas Wutschitz, Shruti Tople, A. Salem, Victor Rühle, Andrew Paverd, Mohammad Naseri, Boris Köpf, Daniel Jones
10 Jun 2022

Auditing Differential Privacy in High Dimensions with the Kernel Quantum Rényi Divergence
Carles Domingo-Enrich, Youssef Mroueh
27 May 2022

How to Combine Membership-Inference Attacks on Multiple Updated Models
Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan R. Ullman, Roxana Geambasu
12 May 2022

Synthetic Data -- what, why and how?
James Jordon, Lukasz Szpruch, F. Houssiau, M. Bottarelli, Giovanni Cherubin, Carsten Maple, Samuel N. Cohen, Adrian Weller
06 May 2022

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini
MIACV · 31 Mar 2022

Rethinking Portrait Matting with Privacy Preserving
Sihan Ma, Jizhizi Li, Jing Zhang, He-jun Zhang, Dacheng Tao
31 Mar 2022