ResearchTrend.AI
arXiv:1610.05820
Membership Inference Attacks against Machine Learning Models

18 October 2016
Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov

Papers citing "Membership Inference Attacks against Machine Learning Models"

50 / 2,058 papers shown
Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks
Lukas Struppek, Dominik Hintersdorf, Kristian Kersting
10 Oct 2023

Domain Watermark: Effective and Harmless Dataset Copyright Protection is Closed at Hand
Junfeng Guo, Yiming Li, Lixu Wang, Shu-Tao Xia, Heng-Chiao Huang, Cong Liu, Boheng Li
09 Oct 2023

FedFed: Feature Distillation against Data Heterogeneity in Federated Learning
Zhiqin Yang, Yonggang Zhang, Yuxiang Zheng, Xinmei Tian, Hao Peng, Tongliang Liu, Bo Han
08 Oct 2023

Big Data Privacy in Emerging Market Fintech and Financial Services: A Research Agenda
J. Blumenstock, Nitin Kohli
08 Oct 2023

Privacy-Preserving Financial Anomaly Detection via Federated Learning & Multi-Party Computation
Sunpreet S. Arora, Andrew Beams, Panagiotis Chatzigiannis, Sebastian Meiser, Karan Patel, ..., Harshal Shah, Yizhen Wang, Yuhang Wu, Hao Yang, Mahdi Zamani
06 Oct 2023

A Survey of Data Security: Practices from Cybersecurity and Challenges of Machine Learning
Padmaksha Roy, Jaganmohan Chandrasekaran, Erin Lanus, Laura J. Freeman, Jeremy Werner
06 Oct 2023

From Zero to Hero: Detecting Leaked Data through Synthetic Data Injection and Model Querying
Biao Wu, Qiang Huang, Anthony K. H. Tung
06 Oct 2023

Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning
Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan R. Ullman
05 Oct 2023

Adversarial Machine Learning for Social Good: Reframing the Adversary as an Ally
Shawqi Al-Maliki, Adnan Qayyum, Hassan Ali, M. Abdallah, Junaid Qadir, D. Hoang, Dusit Niyato, Ala I. Al-Fuqaha
05 Oct 2023

PrIeD-KIE: Towards Privacy Preserved Document Key Information Extraction
S. Saifullah, S. Agne, Andreas Dengel, Sheraz Ahmed
05 Oct 2023

How Much Training Data is Memorized in Overparameterized Autoencoders? An Inverse Problem Perspective on Memorization Evaluation
Koren Abitbul, Yehuda Dar
04 Oct 2023

FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks
Jorge Castillo, Phillip Rieger, Hossein Fereidooni, Qian Chen, Ahmad Sadeghi
03 Oct 2023

Coupling public and private gradient provably helps optimization
Ruixuan Liu, Zhiqi Bu, Yu Wang, Sheng Zha, George Karypis
02 Oct 2023

Gotcha! This Model Uses My Code! Evaluating Membership Leakage Risks in Code Models
Zhou Yang, Zhipeng Zhao, Chenyu Wang, Jieke Shi, Dongsum Kim, Donggyun Han, David Lo
02 Oct 2023

A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks
Yanjie Li, Bin Xie, Songtao Guo, Yuanyuan Yang, Bin Xiao
01 Oct 2023

On Memorization and Privacy Risks of Sharpness Aware Minimization
Young In Kim, Pratiksha Agrawal, J. Royset, Rajiv Khanna
30 Sep 2023

FedLPA: One-shot Federated Learning with Layer-Wise Posterior Aggregation
Xiang Liu, Liangxi Liu, Feiyang Ye, Yunheng Shen, Xia Li, Linshan Jiang, Jialin Li
30 Sep 2023

Beyond Random Noise: Insights on Anonymization Strategies from a Latent Bandit Study
Alexander Galozy, Sadi Alawadi, V. Kebande, Sławomir Nowaczyk
30 Sep 2023

Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning
Hongsheng Hu, Xuyun Zhang, Z. Salcic, Lichao Sun, K. Choo, Gillian Dobbie
30 Sep 2023

Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study
Myeongseob Ko, Ming Jin, Chenguang Wang, Ruoxi Jia
29 Sep 2023

Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks
Vaidehi Patil, Peter Hase, Joey Tianyi Zhou
29 Sep 2023

Leave-one-out Distinguishability in Machine Learning
Jiayuan Ye, Anastasia Borovykh, Soufiane Hayou, Reza Shokri
29 Sep 2023

Recent Advances of Differential Privacy in Centralized Deep Learning: A Systematic Survey
Lea Demelius, Roman Kern, Andreas Trügler
28 Sep 2023

Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey
Victoria Smith, Ali Shahin Shamsabadi, Carolyn Ashurst, Adrian Weller
27 Sep 2023

Critical Infrastructure Security Goes to Space: Leveraging Lessons Learned on the Ground
Tim Ellis, Briland Hitaj, Ulf Lindqvist, Deborah Shands, L. Tinnel, Bruce DeBruhl
26 Sep 2023

Evaluating the Usability of Differential Privacy Tools with Data Practitioners
Ivoline C. Ngong, Brad Stenger, Joseph P. Near, Yuanyuan Feng
24 Sep 2023

DeepTheft: Stealing DNN Model Architectures through Power Side Channel
Yansong Gao, Huming Qiu, Zhi-Li Zhang, Binghui Wang, Hua Ma, A. Abuadbba, Minhui Xue, Anmin Fu, Surya Nepal
21 Sep 2023

Privacy-Preserving In-Context Learning with Differentially Private Few-Shot Generation
Xinyu Tang, Richard Shin, Huseyin A. Inan, Andre Manoel, Fatemehsadat Mireshghallah, Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Robert Sim
21 Sep 2023

Information Leakage from Data Updates in Machine Learning Models
Tian Hui, Farhad Farokhi, Olga Ohrimenko
20 Sep 2023

DPpack: An R Package for Differentially Private Statistical Analysis and Machine Learning
S. Giddens, F. Liu
19 Sep 2023

Model Leeching: An Extraction Attack Targeting LLMs
Lewis Birch, William Hackett, Stefan Trawicki, N. Suri, Peter Garraghan
19 Sep 2023

A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services
Hongsheng Hu, Shuo Wang, Jiamin Chang, Haonan Zhong, Ruoxi Sun, Shuang Hao, Haojin Zhu, Minhui Xue
15 Sep 2023

SLMIA-SR: Speaker-Level Membership Inference Attacks against Speaker Recognition Systems
Guangke Chen, Yedi Zhang, Fu Song
14 Sep 2023

Your Code Secret Belongs to Me: Neural Code Completion Tools Can Memorize Hard-Coded Credentials
Yizhan Huang, Yichen Li, Weibin Wu, Jianping Zhang, Michael R. Lyu
14 Sep 2023

DP-Forward: Fine-tuning and Inference on Language Models with Differential Privacy in Forward Pass
Minxin Du, Xiang Yue, Sherman S. M. Chow, Tianhao Wang, Chenyu Huang, Huan Sun
13 Sep 2023

Level Up: Private Non-Interactive Decision Tree Evaluation using Levelled Homomorphic Encryption
Rasoul Akhavan Mahdavi, Haoyan Ni, Dimitry Linkov, Florian Kerschbaum
12 Sep 2023

Fingerprint Attack: Client De-Anonymization in Federated Learning
Qiongkai Xu, Trevor Cohn, Olga Ohrimenko
12 Sep 2023

Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System
Peixin Zhang, Jun Sun, Mingtian Tan, Xinyu Wang
12 Sep 2023

Privacy Side Channels in Machine Learning Systems
Edoardo Debenedetti, Giorgio Severi, Nicholas Carlini, Christopher A. Choquette-Choo, Matthew Jagielski, Milad Nasr, Eric Wallace, Florian Tramèr
11 Sep 2023

A supervised generative optimization approach for tabular data
S. Nakamura-Sakai, Fadi Hamad, Saheed O. Obitayo, Vamsi K. Potluru
10 Sep 2023

Robust Representation Learning for Privacy-Preserving Machine Learning: A Multi-Objective Autoencoder Approach
Sofiane Ouaari, Ali Burak Ünal, Mete Akgün, Nícolas Pfeifer
08 Sep 2023

Trustworthy and Synergistic Artificial Intelligence for Software Engineering: Vision and Roadmaps
David Lo
08 Sep 2023

Byzantine-Robust Federated Learning with Variance Reduction and Differential Privacy
Zikai Zhang, Rui Hu
07 Sep 2023

Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation
Xiaochen Zhu, Vincent Y. F. Tan, Xiaokui Xiao
06 Sep 2023

ORL-AUDITOR: Dataset Auditing in Offline Deep Reinforcement Learning
L. Du, Min Chen, Mingyang Sun, Shouling Ji, Peng Cheng, Jiming Chen, Zhikun Zhang
06 Sep 2023

Roulette: A Semantic Privacy-Preserving Device-Edge Collaborative Inference Framework for Deep Learning Classification Tasks
Jingyi Li, Guocheng Liao, Lin Chen, Xu Chen
06 Sep 2023

The Adversarial Implications of Variable-Time Inference
Dudi Biton, Aditi Misra, Efrat Levy, J. Kotak, Ron Bitton, R. Schuster, Nicolas Papernot, Yuval Elovici, Ben Nassi
05 Sep 2023

A Blackbox Model Is All You Need to Breach Privacy: Smart Grid Forecasting Models as a Use Case
Hussein A. Aly, Abdulaziz Al-Ali, Abdulla Al-Ali, Q. Malluhi
04 Sep 2023

SemProtector: A Unified Framework for Semantic Protection in Deep Learning-based Semantic Communication Systems
Xinghan Liu, Gu Nan, Qimei Cui, Zeju Li, Peiyuan Liu, Zebin Xing, Hanqing Mu, Xiaofeng Tao, Tony Q. S. Quek
04 Sep 2023

A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications
Yi Zhang, Yuying Zhao, Zhaoqing Li, Xueqi Cheng, Yu-Chiang Frank Wang, Olivera Kotevska, Philip S. Yu, Tyler Derr
31 Aug 2023