© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1806.01246 (Cited By)
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

4 June 2018
A. Salem
Yang Zhang
Mathias Humbert
Pascal Berrang
Mario Fritz
Michael Backes
    MIACV
    MIALM

Papers citing "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models"

50 / 465 papers shown
Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods
William Paul
Yinzhi Cao
Miaomiao Zhang
Philippe Burlina
AAML
MedIm
26
15
0
04 Mar 2021
DPlis: Boosting Utility of Differentially Private Deep Learning via Randomized Smoothing
Wenxiao Wang
Tianhao Wang
Lun Wang
Nanqing Luo
Pan Zhou
D. Song
R. Jia
8
16
0
02 Mar 2021
Differential Privacy and Byzantine Resilience in SGD: Do They Add Up?
R. Guerraoui
Nirupam Gupta
Rafael Pinot
Sébastien Rouault
John Stephan
33
30
0
16 Feb 2021
Machine Learning Based Cyber Attacks Targeting on Controlled Information: A Survey
Yuantian Miao
Chao Chen
Lei Pan
Qing-Long Han
Jun Zhang
Yang Xiang
AAML
51
68
0
16 Feb 2021
Membership Inference Attacks are Easier on Difficult Problems
Avital Shafran
Shmuel Peleg
Yedid Hoshen
MIACV
19
16
0
15 Feb 2021
Node-Level Membership Inference Attacks Against Graph Neural Networks
Xinlei He
Rui Wen
Yixin Wu
Michael Backes
Yun Shen
Yang Zhang
21
93
0
10 Feb 2021
Quantifying and Mitigating Privacy Risks of Contrastive Learning
Xinlei He
Yang Zhang
21
51
0
08 Feb 2021
On Utility and Privacy in Synthetic Genomic Data
Bristena Oprisanu
Georgi Ganev
Emiliano De Cristofaro
22
13
0
05 Feb 2021
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Yugeng Liu
Rui Wen
Xinlei He
A. Salem
Zhikun Zhang
Michael Backes
Emiliano De Cristofaro
Mario Fritz
Yang Zhang
AAML
17
125
0
04 Feb 2021
Membership Inference Attack on Graph Neural Networks
Iyiola E. Olatunji
Wolfgang Nejdl
Megha Khosla
AAML
40
97
0
17 Jan 2021
Training Data Leakage Analysis in Language Models
Huseyin A. Inan
Osman Ramadan
Lukas Wutschitz
Daniel Jones
Victor Rühle
James Withers
Robert Sim
MIACV
PILM
37
9
0
14 Jan 2021
Model Extraction and Defenses on Generative Adversarial Networks
Hailong Hu
Jun Pang
SILM
MIACV
31
14
0
06 Jan 2021
Practical Blind Membership Inference Attack via Differential Comparisons
Bo Hui
Yuchen Yang
Haolin Yuan
Philippe Burlina
Neil Zhenqiang Gong
Yinzhi Cao
MIACV
35
120
0
05 Jan 2021
Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
Mohamed Bennai
Mahum Naseer
T. Theocharides
C. Kyrkou
O. Mutlu
Lois Orosa
Jungwook Choi
OOD
81
100
0
04 Jan 2021
Federated Unlearning
Gaoyang Liu
Xiaoqiang Ma
Yang Yang
Chen Wang
Jiangchuan Liu
MU
43
53
0
27 Dec 2020
FedServing: A Federated Prediction Serving Framework Based on Incentive Mechanism
Jiasi Weng
Jian Weng
Hongwei Huang
Chengjun Cai
Cong Wang
FedML
19
28
0
19 Dec 2020
TransMIA: Membership Inference Attacks Using Transfer Shadow Training
Seira Hidano
Takao Murakami
Yusuke Kawamoto
MIACV
30
13
0
30 Nov 2020
Use the Spear as a Shield: A Novel Adversarial Example based Privacy-Preserving Technique against Membership Inference Attacks
Mingfu Xue
Chengxiang Yuan
Can He
Zhiyu Wu
Yushu Zhang
Zhe Liu
Weiqiang Liu
MIACV
6
12
0
27 Nov 2020
When Machine Learning Meets Privacy: A Survey and Outlook
B. Liu
Ming Ding
Sina Shaham
W. Rahayu
F. Farokhi
Zihuai Lin
20
282
0
24 Nov 2020
Synthetic Data -- Anonymisation Groundhog Day
Theresa Stadler
Bristena Oprisanu
Carmela Troncoso
18
156
0
13 Nov 2020
On the Privacy Risks of Algorithmic Fairness
Hong Chang
Reza Shokri
FaML
33
109
0
07 Nov 2020
FaceLeaks: Inference Attacks against Transfer Learning Models via Black-box Queries
Seng Pei Liew
Tsubasa Takahashi
MIACV
FedML
28
9
0
27 Oct 2020
Exploring the Security Boundary of Data Reconstruction via Neuron Exclusivity Analysis
Xudong Pan
Mi Zhang
Yifan Yan
Jiaming Zhu
Zhemin Yang
AAML
13
21
0
26 Oct 2020
A Differentially Private Text Perturbation Method Using a Regularized Mahalanobis Metric
Zekun Xu
Abhinav Aggarwal
Oluwaseyi Feyisetan
Nathanael Teissier
19
55
0
22 Oct 2020
Feature Inference Attack on Model Predictions in Vertical Federated Learning
Xinjian Luo
Yuncheng Wu
Xiaokui Xiao
Beng Chin Ooi
FedML
AAML
11
219
0
20 Oct 2020
Image Obfuscation for Privacy-Preserving Machine Learning
Mathilde Raynal
R. Achanta
Mathias Humbert
38
13
0
20 Oct 2020
Security and Privacy Considerations for Machine Learning Models Deployed in the Government and Public Sector (white paper)
Nader Sehatbakhsh
E. Daw
O. Savas
Amin Hassanzadeh
I. Mcculloh
SILM
14
1
0
12 Oct 2020
Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks
A. Salem
Michael Backes
Yang Zhang
8
35
0
07 Oct 2020
GECKO: Reconciling Privacy, Accuracy and Efficiency in Embedded Deep Learning
Vasisht Duddu
A. Boutet
Virat Shejwalkar
GNN
24
4
0
02 Oct 2020
Quantifying Privacy Leakage in Graph Embedding
Vasisht Duddu
A. Boutet
Virat Shejwalkar
MIACV
17
119
0
02 Oct 2020
On Primes, Log-Loss Scores and (No) Privacy
Abhinav Aggarwal
Zekun Xu
Oluwaseyi Feyisetan
Nathanael Teissier
MIACV
8
0
0
17 Sep 2020
Privacy Analysis of Deep Learning in the Wild: Membership Inference Attacks against Transfer Learning
Yang Zou
Zhikun Zhang
Michael Backes
Yang Zhang
MIACV
17
32
0
10 Sep 2020
Local and Central Differential Privacy for Robustness and Privacy in Federated Learning
Mohammad Naseri
Jamie Hayes
Emiliano De Cristofaro
FedML
33
144
0
08 Sep 2020
A Comprehensive Analysis of Information Leakage in Deep Transfer Learning
Cen Chen
Bingzhe Wu
Minghui Qiu
Li Wang
Jun Zhou
PILM
16
10
0
04 Sep 2020
Enclave-Aware Compartmentalization and Secure Sharing with Sirius
Zahra Tarkhani
Anil Madhavapeddy
6
2
0
03 Sep 2020
Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries
Shadi Rahimian
Tribhuvanesh Orekondy
Mario Fritz
MIACV
13
25
0
01 Sep 2020
Against Membership Inference Attack: Pruning is All You Need
Yijue Wang
Chenghong Wang
Zigeng Wang
Shangli Zhou
Hang Liu
J. Bi
Caiwen Ding
Sanguthevar Rajasekaran
MIACV
17
48
0
28 Aug 2020
Not one but many Tradeoffs: Privacy Vs. Utility in Differentially Private Machine Learning
Benjamin Zi Hao Zhao
M. Kâafar
N. Kourtellis
13
26
0
20 Aug 2020
Data Minimization for GDPR Compliance in Machine Learning Models
Abigail Goldsteen
Gilad Ezov
Ron Shmelkin
Micha Moffie
Ariel Farkash
8
63
0
06 Aug 2020
The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures
Evgenios M. Kornaropoulos
Silei Ren
R. Tamassia
AAML
16
17
0
01 Aug 2020
Membership Leakage in Label-Only Exposures
Zheng Li
Yang Zhang
34
237
0
30 Jul 2020
Label-Only Membership Inference Attacks
Christopher A. Choquette-Choo
Florian Tramèr
Nicholas Carlini
Nicolas Papernot
MIACV
MIALM
35
494
0
28 Jul 2020
Anonymizing Machine Learning Models
Abigail Goldsteen
Gilad Ezov
Ron Shmelkin
Micha Moffie
Ariel Farkash
MIACV
19
5
0
26 Jul 2020
How Does Data Augmentation Affect Privacy in Machine Learning?
Da Yu
Huishuai Zhang
Wei Chen
Jian Yin
Tie-Yan Liu
MU
26
1
0
21 Jul 2020
A Survey of Privacy Attacks in Machine Learning
M. Rigaki
Sebastian Garcia
PILM
AAML
39
213
0
15 Jul 2020
Sharing Models or Coresets: A Study based on Membership Inference Attack
Hanlin Lu
Changchang Liu
T. He
Shiqiang Wang
Kevin S. Chan
MIACV
FedML
19
15
0
06 Jul 2020
Reducing Risk of Model Inversion Using Privacy-Guided Training
Abigail Goldsteen
Gilad Ezov
Ariel Farkash
30
4
0
29 Jun 2020
On the Effectiveness of Regularization Against Membership Inference Attacks
Yigitcan Kaya
Sanghyun Hong
Tudor Dumitras
40
27
0
09 Jun 2020
Sponge Examples: Energy-Latency Attacks on Neural Networks
Ilia Shumailov
Yiren Zhao
Daniel Bates
Nicolas Papernot
Robert D. Mullins
Ross J. Anderson
SILM
19
127
0
05 Jun 2020
BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements
Xiaoyi Chen
A. Salem
Dingfan Chen
Michael Backes
Shiqing Ma
Qingni Shen
Zhonghai Wu
Yang Zhang
SILM
29
228
0
01 Jun 2020