Machine Learning Models that Remember Too Much (arXiv:1709.07886)
22 September 2017
Congzheng Song, Thomas Ristenpart, Vitaly Shmatikov
Papers citing "Machine Learning Models that Remember Too Much" (50 of 217 shown)
A Comprehensive Survey on Local Differential Privacy Toward Data Statistics and Analysis · Teng Wang, Xuefeng Zhang, Xinyu Yang · 11 Oct 2020
GECKO: Reconciling Privacy, Accuracy and Efficiency in Embedded Deep Learning · Vasisht Duddu, A. Boutet, Virat Shejwalkar · GNN · 02 Oct 2020
Quantifying Privacy Leakage in Graph Embedding · Vasisht Duddu, A. Boutet, Virat Shejwalkar · MIACV · 02 Oct 2020
A Systematic Review on Model Watermarking for Neural Networks · Franziska Boenisch · AAML · 25 Sep 2020
An Extension of Fano's Inequality for Characterizing Model Susceptibility to Membership Inference Attacks · Sumit Kumar Jha, Susmit Jha, Rickard Ewetz, Sunny Raj, Alvaro Velasquez, L. Pullum, A. Swami · MIACV · 17 Sep 2020
Local and Central Differential Privacy for Robustness and Privacy in Federated Learning · Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro · FedML · 08 Sep 2020
Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries · Shadi Rahimian, Tribhuvanesh Orekondy, Mario Fritz · MIACV · 01 Sep 2020
A non-discriminatory approach to ethical deep learning · Enzo Tartaglione, Marco Grangetto · 04 Aug 2020
Privacy-preserving Voice Analysis via Disentangled Representations · Ranya Aloufi, Hamed Haddadi, David E. Boyle · DRL · 29 Jul 2020
ML Privacy Meter: Aiding Regulatory Compliance by Quantifying the Privacy Risks of Machine Learning · S. K. Murakonda, Reza Shokri · 18 Jul 2020
Less is More: A privacy-respecting Android malware classifier using Federated Learning · Rafa Gálvez, Veelasha Moonsamy, Claudia Díaz · FedML · 16 Jul 2020
Stability Enhanced Privacy and Applications in Private Stochastic Gradient Descent · Lauren Watson, Benedek Rozemberczki, Rik Sarkar · 25 Jun 2020
Large image datasets: A pyrrhic win for computer vision? · Vinay Uday Prabhu, Abeba Birhane · 24 Jun 2020
On the Difficulty of Membership Inference Attacks · Shahbaz Rezaei, Xin Liu · MIACV · 27 May 2020
An Overview of Privacy in Machine Learning · Emiliano De Cristofaro · SILM · 18 May 2020
When Machine Unlearning Jeopardizes Privacy · Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang · MIACV · 05 May 2020
Enhancing Privacy via Hierarchical Federated Learning · A. Wainakh, Alejandro Sánchez Guinea, Tim Grube, M. Mühlhäuser · FedML · 23 Apr 2020
Private Knowledge Transfer via Model Distillation with Generative Adversarial Networks · Di Gao, Cheng Zhuo · 05 Apr 2020
Systematic Evaluation of Privacy Risks of Machine Learning Models · Liwei Song, Prateek Mittal · MIACV · 24 Mar 2020
Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations · Aditya Golatkar, Alessandro Achille, Stefano Soatto · MU, OOD · 05 Mar 2020
Formalizing Data Deletion in the Context of the Right to be Forgotten · Sanjam Garg, S. Goldwasser, Prashant Nalini Vasudevan · AILaw, MU · 25 Feb 2020
Fawkes: Protecting Privacy against Unauthorized Deep Learning Models · Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, Ben Y. Zhao · PICV, MU · 19 Feb 2020
The Differentially Private Lottery Ticket Mechanism · Lovedeep Gondara, Ke Wang, Ricardo Silva Carvalho · 16 Feb 2020
Salvaging Federated Learning by Local Adaptation · Tao Yu, Eugene Bagdasaryan, Vitaly Shmatikov · FedML · 12 Feb 2020
Towards Security Threats of Deep Learning Systems: A Survey · Yingzhe He, Guozhu Meng, Kai Chen, Xingbo Hu, Jinwen He · AAML, ELM · 28 Nov 2019
Robust Anomaly Detection and Backdoor Attack Detection Via Differential Privacy · Min Du, R. Jia, D. Song · AAML · 16 Nov 2019
CHEETAH: An Ultra-Fast, Approximation-Free, and Privacy-Preserved Neural Network Framework based on Joint Obscure Linear and Nonlinear Computations · Qiao Zhang, Cong Wang, Chunsheng Xin, Hongyi Wu · 12 Nov 2019
Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks · Aditya Golatkar, Alessandro Achille, Stefano Soatto · CLL, MU · 12 Nov 2019
Theoretical Guarantees for Model Auditing with Finite Adversaries · Mario Díaz, Peter Kairouz, Jiachun Liao, Lalitha Sankar · MLAU, AAML · 08 Nov 2019
DP-MAC: The Differentially Private Method of Auxiliary Coordinates for Deep Learning · Frederik Harder, Jonas Köhler, Max Welling, Mijung Park · 15 Oct 2019
Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning · Congzheng Song, Reza Shokri · MIACV · 27 Sep 2019
That which we call private · Ulfar Erlingsson, Ilya Mironov, A. Raghunathan, Shuang Song · 08 Aug 2019
Local Differential Privacy for Deep Learning · Pathum Chamikara Mahawaga Arachchige, P. Bertók, I. Khalil, Dongxi Liu, S. Çamtepe, Mohammed Atiquzzaman · 08 Aug 2019
Secure Architectures Implementing Trusted Coalitions for Blockchained Distributed Learning (TCLearn) · S. Lugan, P. Desbordes, Luis Xavier Ramos Tormo, Axel Legay, Benoit Macq · FedML · 18 Jun 2019
Membership Privacy for Machine Learning Models Through Knowledge Transfer · Virat Shejwalkar, Amir Houmansadr · 15 Jun 2019
Effectiveness of Distillation Attack and Countermeasure on Neural Network Watermarking · Ziqi Yang, Hung Dang, E. Chang · AAML · 14 Jun 2019
Interpretable and Differentially Private Predictions · Frederik Harder, Matthias Bauer, Mijung Park · FAtt · 05 Jun 2019
Privacy Risks of Securing Machine Learning Models against Adversarial Examples · Liwei Song, Reza Shokri, Prateek Mittal · SILM, MIACV, AAML · 24 May 2019
Real numbers, data science and chaos: How to fit any dataset with a single parameter · L. Boué · 28 Apr 2019
How You Act Tells a Lot: Privacy-Leakage Attack on Deep Reinforcement Learning · Xinlei Pan, Weiyao Wang, Xiaoshuai Zhang, Bo-wen Li, Jinfeng Yi, D. Song · MIACV · 24 Apr 2019
Differentially Private Model Publishing for Deep Learning · Lei Yu, Ling Liu, C. Pu, Mehmet Emre Gursoy, Stacey Truex · FedML · 03 Apr 2019
Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning · A. Salem, Apratim Bhattacharyya, Michael Backes, Mario Fritz, Yang Zhang · FedML, AAML, MIACV · 01 Apr 2019
STYLE-ANALYZER: fixing code style inconsistencies with interpretable unsupervised algorithms · Vadim Markovtsev, Waren Long, Hugo Mougard, Konstantin Slavnov, Egor Bulychev · 01 Apr 2019
Copying Machine Learning Classifiers · Irene Unceta, Jordi Nin, O. Pujol · 05 Mar 2019
TamperNN: Efficient Tampering Detection of Deployed Neural Nets · Erwan Le Merrer, Gilles Tredan · MLAU, AAML · 01 Mar 2019
Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment · Ziqi Yang, E. Chang, Zhenkai Liang · MLAU · 22 Feb 2019
Contamination Attacks and Mitigation in Multi-Party Machine Learning · Jamie Hayes, O. Ohrimenko · AAML, FedML · 08 Jan 2019
Reaching Data Confidentiality and Model Accountability on the CalTrain · Zhongshu Gu, Hani Jamjoom, D. Su, Heqing Huang, Jialong Zhang, Tengfei Ma, Dimitrios E. Pendarakis, Ian Molloy · FedML · 07 Dec 2018
Privacy Partitioning: Protecting User Data During the Deep Learning Inference Phase · Jianfeng Chi, Emmanuel Owusu, Xuwang Yin, Tong Yu, William Chan, P. Tague, Yuan Tian · FedML · 07 Dec 2018
Differentially Private Fair Learning · Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan R. Ullman · FaML, FedML · 06 Dec 2018