Systematic Evaluation of Privacy Risks of Machine Learning Models
Liwei Song, Prateek Mittal
24 March 2020 · arXiv:2003.10595 · MIACV
Papers citing "Systematic Evaluation of Privacy Risks of Machine Learning Models" (36 papers)
- Unveiling Impact of Frequency Components on Membership Inference Attacks for Diffusion Models · Puwei Lian, Yujun Cai, Songze Li · 27 May 2025
- Quantifying Privacy Leakage in Split Inference via Fisher-Approximated Shannon Information Analysis · Ruijun Deng, Zhihui Lu, Qiang Duan · FedML · 14 Apr 2025
- Are Neuromorphic Architectures Inherently Privacy-preserving? An Exploratory Study · Ayana Moshruba, Ihsen Alouani, Maryam Parsa · AAML · 24 Feb 2025
- Rethinking Membership Inference Attacks Against Transfer Learning · Yanwei Yue, Jing Chen, Qianru Fang, Kun He, Ziming Zhao, Hao Ren, Guowen Xu, Yang Liu, Yang Xiang · 20 Jan 2025
- Trustworthiness of Stochastic Gradient Descent in Distributed Learning · Hongyang Li, Caesar Wu, Mohammed Chadli, Said Mammar, Pascal Bouvry · 28 Oct 2024
- SoK: Dataset Copyright Auditing in Machine Learning Systems · L. Du, Xuanru Zhou, M. Chen, Chusong Zhang, Zhou Su, Peng Cheng, Jiming Chen, Zhikun Zhang · MLAU · 22 Oct 2024
- Adversarial Attacks on Data Attribution · Xinhe Wang, Pingbang Hu, Junwei Deng, Jiaqi W. Ma · TDI · 09 Sep 2024
- FedQUIT: On-Device Federated Unlearning via a Quasi-Competent Virtual Teacher · Alessio Mora, Lorenzo Valerio, Paolo Bellavista, A. Passarella · FedML, MU · 14 Aug 2024
- AI Data Readiness Inspector (AIDRIN) for Quantitative Assessment of Data Readiness for AI · Kaveen Hiniduma, Suren Byna, J. L. Bez, Ravi Madduri · 27 Jun 2024
- Convolutional Networks with Dense Connectivity · Gao Huang, Zhuang Liu, Geoff Pleiss, Laurens van der Maaten, Kilian Q. Weinberger · 3DV · 08 Jan 2020
- MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples · Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong · 23 Sep 2019
- GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models · Dingfan Chen, Ning Yu, Yang Zhang, Mario Fritz · 09 Sep 2019
- Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection · Bingzhe Wu, Shiwan Zhao, Chaochao Chen, Haoyang Xu, Li Wang, Xiaolu Zhang, Guangyu Sun, Jun Zhou · 21 Aug 2019
- Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference · Klas Leino, Matt Fredrikson · MIACV · 27 Jun 2019
- Membership Privacy for Machine Learning Models Through Knowledge Transfer · Virat Shejwalkar, Amir Houmansadr · 15 Jun 2019
- Reconstruction and Membership Inference Attacks against Generative Models · Benjamin Hilprecht, Martin Härterich, Daniel Bernau · AAML, MIACV · 07 Jun 2019
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples · Liwei Song, Reza Shokri, Prateek Mittal · SILM, MIACV, AAML · 24 May 2019
- Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning · Ahmed Salem, Apratim Bhattacharyya, Michael Backes, Mario Fritz, Yang Zhang · FedML, AAML, MIACV · 01 Apr 2019
- Machine Learning with Membership Privacy using Adversarial Regularization · Milad Nasr, Reza Shokri, Amir Houmansadr · FedML, MIACV · 16 Jul 2018
- ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models · Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes · MIACV, MIALM · 04 Jun 2018
- Exploiting Unintended Feature Leakage in Collaborative Learning · Luca Melis, Congzheng Song, Emiliano De Cristofaro, Vitaly Shmatikov · FedML · 10 May 2018
- The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks · Nicholas Carlini, Chang Liu, Ulfar Erlingsson, Jernej Kos, Dawn Song · 22 Feb 2018
- Understanding Membership Inferences on Well-Generalized Learning Models · Yunhui Long, Vincent Bindschaedler, Lei Wang, Diyue Bu, Xiaofeng Wang, Haixu Tang, Carl A. Gunter, Kai Chen · MIALM, MIACV · 13 Feb 2018
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples · Anish Athalye, Nicholas Carlini, David Wagner · AAML · 01 Feb 2018
- Machine Learning Models that Remember Too Much · Congzheng Song, Thomas Ristenpart, Vitaly Shmatikov · VLM · 22 Sep 2017
- Knock Knock, Who's There? Membership Inference on Aggregate Location Data · Apostolos Pyrgelis, Carmela Troncoso, Emiliano De Cristofaro · MIACV · 21 Aug 2017
- Towards Deep Learning Models Resistant to Adversarial Attacks · Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 19 Jun 2017
- Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods · Nicholas Carlini, David Wagner · AAML · 20 May 2017
- Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning · Briland Hitaj, Giuseppe Ateniese, Fernando Perez-Cruz · FedML · 24 Feb 2017
- Membership Inference Attacks against Machine Learning Models · Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov · SLR, MIALM, MIACV · 18 Oct 2016
- Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data · Nicolas Papernot, Martín Abadi, Ulfar Erlingsson, Ian Goodfellow, Kunal Talwar · 18 Oct 2016
- Densely Connected Convolutional Networks · Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger · PINN, 3DV · 25 Aug 2016
- Deep Learning with Differential Privacy · Martín Abadi, Andy Chu, Ian Goodfellow, H. B. McMahan, Ilya Mironov, Kunal Talwar, Li Zhang · FedML, SyDa · 01 Jul 2016
- Wide Residual Networks · Sergey Zagoruyko, Nikos Komodakis · 23 May 2016
- Deep Residual Learning for Image Recognition · Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · MedIm · 10 Dec 2015
- Very Deep Convolutional Networks for Large-Scale Image Recognition · Karen Simonyan, Andrew Zisserman · FAtt, MDE · 04 Sep 2014