
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

4 June 2018
A. Salem
Yang Zhang
Mathias Humbert
Pascal Berrang
Mario Fritz
Michael Backes
    MIACV
    MIALM

Papers citing "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models"

Showing 50 of 465 citing papers.
On the Difficulty of Membership Inference Attacks
Shahbaz Rezaei
Xin Liu
MIACV
22
13
0
27 May 2020
Revisiting Membership Inference Under Realistic Assumptions
Bargav Jayaraman
Lingxiao Wang
Katherine Knipmeyer
Quanquan Gu
David Evans
24
147
0
21 May 2020
An Overview of Privacy in Machine Learning
Emiliano De Cristofaro
SILM
33
83
0
18 May 2020
DAMIA: Leveraging Domain Adaptation as a Defense against Membership Inference Attacks
Hongwei Huang
Weiqi Luo
Guoqiang Zeng
J. Weng
Yue Zhang
Anjia Yang
AAML
15
24
0
16 May 2020
Defending Model Inversion and Membership Inference Attacks via Prediction Purification
Ziqi Yang
Bin Shao
Bohan Xuan
E. Chang
Fan Zhang
AAML
25
71
0
08 May 2020
When Machine Unlearning Jeopardizes Privacy
Min Chen
Zhikun Zhang
Tianhao Wang
Michael Backes
Mathias Humbert
Yang Zhang
MIACV
31
218
0
05 May 2020
Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning
Xinjian Luo
Xiangqi Zhu
FedML
75
25
0
27 Apr 2020
Privacy in Deep Learning: A Survey
Fatemehsadat Mirshghallah
Mohammadkazem Taram
Praneeth Vepakomma
Abhishek Singh
Ramesh Raskar
H. Esmaeilzadeh
FedML
19
135
0
25 Apr 2020
DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments
Fan Mo
Ali Shahin Shamsabadi
Kleomenis Katevas
Soteris Demetriou
Ilias Leontiadis
Andrea Cavallaro
Hamed Haddadi
FedML
18
175
0
12 Apr 2020
Information Leakage in Embedding Models
Congzheng Song
A. Raghunathan
MIACV
24
262
0
31 Mar 2020
Learn to Forget: Machine Unlearning via Neuron Masking
Yang Liu
Zhuo Ma
Ximeng Liu
Jian-wei Liu
Zhongyuan Jiang
Jianfeng Ma
Philip Yu
K. Ren
MU
22
61
0
24 Mar 2020
Systematic Evaluation of Privacy Risks of Machine Learning Models
Liwei Song
Prateek Mittal
MIACV
196
360
0
24 Mar 2020
Dynamic Backdoor Attacks Against Machine Learning Models
A. Salem
Rui Wen
Michael Backes
Shiqing Ma
Yang Zhang
AAML
45
271
0
07 Mar 2020
Membership Inference Attacks and Defenses in Classification Models
Jiacheng Li
Ninghui Li
Bruno Ribeiro
17
34
0
27 Feb 2020
On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
Sanghyun Hong
Varun Chandrasekaran
Yigitcan Kaya
Tudor Dumitras
Nicolas Papernot
AAML
28
136
0
26 Feb 2020
Approximate Data Deletion from Machine Learning Models
Zachary Izzo
Mary Anne Smart
Kamalika Chaudhuri
James Zou
MU
22
249
0
24 Feb 2020
Optimizing Privacy-Preserving Outsourced Convolutional Neural Network Predictions
Minghui Li
Sherman S. M. Chow
Shengshan Hu
Yuejing Yan
Minxin Du
Peng Kuang
6
45
0
22 Feb 2020
Data and Model Dependencies of Membership Inference Attack
Shakila Mahjabin Tonni
Dinusha Vatsalan
F. Farokhi
Dali Kaafar
Zhigang Lu
Gioacchino Tangari
11
17
0
17 Feb 2020
Modelling and Quantifying Membership Information Leakage in Machine Learning
F. Farokhi
M. Kâafar
AAML
FedML
MIACV
56
24
0
29 Jan 2020
Privacy for All: Demystify Vulnerability Disparity of Differential Privacy against Membership Inference Attack
Bo Zhang
Ruotong Yu
Haipei Sun
Yanying Li
Jun Xu
Wendy Hui Wang
AAML
22
13
0
24 Jan 2020
On the Resilience of Biometric Authentication Systems against Random Inputs
Benjamin Zi Hao Zhao
Hassan Jameel Asghar
M. Kâafar
AAML
39
23
0
13 Jan 2020
Membership Inference Attacks Against Object Detection Models
Yeachan Park
Myung-joo Kang
MIACV
29
6
0
12 Jan 2020
Privacy Attacks on Network Embeddings
Michael Ellers
Michael Cochez
Tobias Schumacher
M. Strohmaier
Florian Lemmerich
AAML
19
12
0
23 Dec 2019
Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation
Yang He
Shadi Rahimian
Bernt Schiele
Mario Fritz
MIACV
21
49
0
20 Dec 2019
Analyzing Information Leakage of Updates to Natural Language Models
Santiago Zanella Béguelin
Lukas Wutschitz
Shruti Tople
Victor Rühle
Andrew Paverd
O. Ohrimenko
Boris Köpf
Marc Brockschmidt
ELM
MIACV
FedML
PILM
KELM
8
125
0
17 Dec 2019
Towards Security Threats of Deep Learning Systems: A Survey
Yingzhe He
Guozhu Meng
Kai Chen
Xingbo Hu
Jinwen He
AAML
ELM
15
14
0
28 Nov 2019
Survey of Attacks and Defenses on Edge-Deployed Neural Networks
Mihailo Isakov
V. Gadepally
K. Gettings
Michel A. Kinsy
AAML
22
31
0
27 Nov 2019
Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability
Stacey Truex
Ling Liu
Mehmet Emre Gursoy
Wenqi Wei
Lei Yu
MIACV
29
46
0
21 Nov 2019
Privacy Leakage Avoidance with Switching Ensembles
R. Izmailov
Peter Lin
Chris Mesterharm
S. Basu
25
2
0
18 Nov 2019
Revocable Federated Learning: A Benchmark of Federated Forest
Yang Liu
Zhuo Ma
Ximeng Liu
Zhuzhu Wang
Siqi Ma
Ken Ren
FedML
MU
27
10
0
08 Nov 2019
Reducing audio membership inference attack accuracy to chance: 4 defenses
M. Lomnitz
Nina Lopatina
Paul Gamble
Z. Hampel-Arias
Lucas Tindall
Felipe A. Mejia
M. Barrios
AAML
17
0
0
31 Oct 2019
Quantifying (Hyper) Parameter Leakage in Machine Learning
Vasisht Duddu
D. V. Rao
AAML
MIACV
FedML
36
5
0
31 Oct 2019
Fault Tolerance of Neural Networks in Adversarial Settings
Vasisht Duddu
N. Pillai
D. V. Rao
V. Balas
SILM
AAML
27
11
0
30 Oct 2019
Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning
Congzheng Song
Reza Shokri
MIACV
21
5
0
27 Sep 2019
Alleviating Privacy Attacks via Causal Learning
Shruti Tople
Amit Sharma
A. Nori
MIACV
OOD
33
32
0
27 Sep 2019
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
Jinyuan Jia
Ahmed Salem
Michael Backes
Yang Zhang
Neil Zhenqiang Gong
24
384
0
23 Sep 2019
Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges
Jinyuan Jia
Neil Zhenqiang Gong
AAML
SILM
17
16
0
17 Sep 2019
GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models
Dingfan Chen
Ning Yu
Yang Zhang
Mario Fritz
23
52
0
09 Sep 2019
High Accuracy and High Fidelity Extraction of Neural Networks
Matthew Jagielski
Nicholas Carlini
David Berthelot
Alexey Kurakin
Nicolas Papernot
MLAU
MIACV
39
372
0
03 Sep 2019
White-box vs Black-box: Bayes Optimal Strategies for Membership Inference
Alexandre Sablayrolles
Matthijs Douze
Yann Ollivier
Cordelia Schmid
Hervé Jégou
MIACV
31
352
0
29 Aug 2019
On Inferring Training Data Attributes in Machine Learning Models
Benjamin Zi Hao Zhao
Hassan Jameel Asghar
Raghav Bhaskar
M. Kâafar
TDI
MIACV
20
11
0
28 Aug 2019
Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference
Klas Leino
Matt Fredrikson
MIACV
50
267
0
27 Jun 2019
Adversarial training approach for local data debiasing
Ulrich Aïvodji
F. Bidet
Sébastien Gambs
Rosin Claude Ngueveu
Alain Tapp
11
7
0
19 Jun 2019
Membership Privacy for Machine Learning Models Through Knowledge Transfer
Virat Shejwalkar
Amir Houmansadr
22
10
0
15 Jun 2019
Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks
Felipe A. Mejia
Paul Gamble
Z. Hampel-Arias
M. Lomnitz
Nina Lopatina
Lucas Tindall
M. Barrios
SILM
27
18
0
15 Jun 2019
Disparate Vulnerability to Membership Inference Attacks
B. Kulynych
Mohammad Yaghini
Giovanni Cherubin
Michael Veale
Carmela Troncoso
13
39
0
02 Jun 2019
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
Liwei Song
Reza Shokri
Prateek Mittal
SILM
MIACV
AAML
6
235
0
24 May 2019
The Audio Auditor: User-Level Membership Inference in Internet of Things Voice Services
Yuantian Miao
Minhui Xue
Chao Chen
Lei Pan
Jinchao Zhang
Benjamin Zi Hao Zhao
Dali Kaafar
Yang Xiang
19
34
0
17 May 2019
Language in Our Time: An Empirical Analysis of Hashtags
Yang Zhang
23
26
0
11 May 2019
Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?
Sorami Hisamoto
Matt Post
Kevin Duh
MIACV
SLR
30
106
0
11 Apr 2019