Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model
Tudor Cebere, A. Bellet, Nicolas Papernot
23 May 2024

Papers citing "Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model" (42 papers)

Empirical Privacy Variance
Yuzheng Hu, Fan Wu, Ruicheng Xian, Yuhang Liu, Lydia Zakynthinou, Pritish Kamath, Chiyuan Zhang, David A. Forsyth
16 Mar 2025

Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios
Sangyeon Yoon, Wonje Jeung, Albert No
02 Dec 2024

Data Deletion for Linear Regression with Noisy SGD
Zhangjie Xia, Chi-Hua Wang, Guang Cheng
12 Oct 2024

The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD
Thomas Steinke, Milad Nasr, Arun Ganesh, Borja Balle, Christopher A. Choquette-Choo, Matthew Jagielski, Jamie Hayes, Abhradeep Thakurta, Adam Smith, Andreas Terzis
08 Oct 2024

It's Our Loss: No Privacy Amplification for Hidden State DP-SGD With Non-Convex Loss
Meenatchi Sundaram Muthu Selva Annamalai
09 Jul 2024

Nearly Tight Black-Box Auditing of Differentially Private Machine Learning
Meenatchi Sundaram Muthu Selva Annamalai, Emiliano De Cristofaro
23 May 2024

Privacy Backdoors: Stealing Data with Corrupted Pretrained Models
Shanglun Feng, Florian Tramèr
30 Mar 2024

Shifted Interpolation for Differential Privacy
Jinho Bok, Weijie Su, Jason M. Altschuler
01 Mar 2024

Unified Enhancement of Privacy Bounds for Mixture Mechanisms via $f$-Differential Privacy
Chendi Wang, Buxin Su, Jiayuan Ye, Reza Shokri, Weijie J. Su
30 Oct 2023

Privacy Loss of Noisy Stochastic Gradient Descent Might Converge Even for Non-Convex Losses
S. Asoodeh, Mario Díaz
17 May 2023

Privacy Auditing with One (1) Training Run
Thomas Steinke, Milad Nasr, Matthew Jagielski
15 May 2023

Tight Auditing of Differentially Private Machine Learning
Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis
15 Feb 2023

One-shot Empirical Privacy Estimation for Federated Learning
Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, H. B. McMahan, Vinith Suriyakumar
06 Feb 2023

A General Framework for Auditing Differentially Private Machine Learning
Fred Lu, Joseph Munoz, Maya Fuchs, Tyler LeBlond, Elliott Zaresky-Williams, Edward Raff, Francis Ferraro, Brian Testa
16 Oct 2022

CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning
Samuel Maddock, Alexandre Sablayrolles, Pierre Stock
06 Oct 2022

Connect the Dots: Tighter Discrete Approximations of Privacy Loss Distributions
Vadym Doroshenko, Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi
10 Jul 2022

Reconstructing Training Data from Trained Neural Networks
Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani
15 Jun 2022

Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss
Jason M. Altschuler, Kunal Talwar
27 May 2022

Unlocking High-Accuracy Differentially Private Image Classification through Scale
Soham De, Leonard Berrada, Jamie Hayes, Samuel L. Smith, Borja Balle
28 Apr 2022

Differentially Private Learning Needs Hidden State (Or Much Faster Convergence)
Jiayuan Ye, Reza Shokri
10 Mar 2022

Reconstructing Training Data with Informed Adversaries
Borja Balle, Giovanni Cherubin, Jamie Hayes
13 Jan 2022

Membership Inference Attacks From First Principles
Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr
07 Dec 2021

Opacus: User-Friendly Differential Privacy Library in PyTorch
Ashkan Yousefpour, I. Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, ..., Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, Ilya Mironov
25 Sep 2021

Adaptive Machine Unlearning
Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Chris Waites
08 Jun 2021

Numerical Composition of Differential Privacy
Sivakanth Gopi, Y. Lee, Lukas Wutschitz
05 Jun 2021

Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning
Haibo Yang, Minghong Fang, Jia Liu
27 Jan 2021

Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning
Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini
11 Jan 2021

Privacy Amplification by Decentralization
Edwige Cyffers, A. Bellet
09 Dec 2020

Descent-to-Delete: Gradient-Based Methods for Machine Unlearning
Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi
06 Jul 2020

Auditing Differentially Private Machine Learning: How Private is Private SGD?
Matthew Jagielski, Jonathan R. Ullman, Alina Oprea
13 Jun 2020

Machine Unlearning
Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, Nicolas Papernot
09 Dec 2019

PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala
03 Dec 2019

Certified Data Removal from Machine Learning Models
Chuan Guo, Tom Goldstein, Awni Y. Hannun, Laurens van der Maaten
08 Nov 2019

Privacy Amplification by Mixing and Diffusion Mechanisms
Borja Balle, Gilles Barthe, Marco Gaboardi, J. Geumlek
29 May 2019

Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity
Ulfar Erlingsson, Vitaly Feldman, Ilya Mironov, A. Raghunathan, Kunal Talwar, Abhradeep Thakurta
29 Nov 2018

Privacy Amplification by Iteration
Vitaly Feldman, Ilya Mironov, Kunal Talwar, Abhradeep Thakurta
20 Aug 2018

Membership Inference Attacks against Machine Learning Models
Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov
18 Oct 2016

Deep Learning with Differential Privacy
Martín Abadi, Andy Chu, Ian Goodfellow, H. B. McMahan, Ilya Mironov, Kunal Talwar, Li Zhang
01 Jul 2016

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
10 Dec 2015

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
06 Feb 2015

The Composition Theorem for Differential Privacy
Peter Kairouz, Sewoong Oh, Pramod Viswanath
04 Nov 2013

What Can We Learn Privately?
S. Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, Adam D. Smith
06 Mar 2008