Membership Inference Attacks against Machine Learning Models
18 October 2016
Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov
SLR · MIALM · MIACV

Papers citing "Membership Inference Attacks against Machine Learning Models"

50 / 2,058 papers shown
AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification
    Zirui Gong, Liyue Shen, Yanjun Zhang, Leo Yu Zhang, Jingwei Wang, Guangdong Bai, Yong Xiang · AAML · 13 Nov 2023

Preserving Node-level Privacy in Graph Neural Networks
    Zihang Xiang, Tianhao Wang, Di Wang · 12 Nov 2023

Inference and Interference: The Role of Clipping, Pruning and Loss Landscapes in Differentially Private Stochastic Gradient Descent
    Lauren Watson, Eric Gan, Mohan Dantam, Baharan Mirzasoleiman, Rik Sarkar · 12 Nov 2023

Data Contamination Quiz: A Tool to Detect and Estimate Contamination in Large Language Models
    Shahriar Golchin, Mihai Surdeanu · 10 Nov 2023

Federated Experiment Design under Distributed Differential Privacy
    Wei-Ning Chen, Graham Cormode, Akash Bharadwaj, Peter Romov, Ayfer Özgür · FedML · 07 Nov 2023

PrivLM-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models
    Haoran Li, Dadi Guo, Donghao Li, Wei Fan, Qi Hu, Xin Liu, Chunkit Chan, Duanyi Yao, Yuan Yao, Yangqiu Song · PILM · 07 Nov 2023

Preserving Privacy in GANs Against Membership Inference Attack
    Mohammadhadi Shateri, Francisco Messina, Fabrice Labeau, Pablo Piantanida · 06 Nov 2023

SoK: Memorisation in machine learning
    Dmitrii Usynin, Moritz Knolle, Georgios Kaissis · 06 Nov 2023

Towards Machine Unlearning Benchmarks: Forgetting the Personal Identities in Facial Recognition Systems
    Dasol Choi, Dongbin Na · CVBM, MU · 03 Nov 2023

ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach
    Yuke Hu, Jian Lou, Jiaqi Liu, Wangze Ni, Feng Lin, Zhan Qin, Kui Ren · MU · 03 Nov 2023

MIST: Defending Against Membership Inference Attacks Through Membership-Invariant Subspace Training
    Jiacheng Li, Ninghui Li, Bruno Ribeiro · 02 Nov 2023

Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks
    Jiayuan Ye, Zhenyu Zhu, Fanghui Liu, Reza Shokri, V. Cevher · 31 Oct 2023

A Survey on Federated Unlearning: Challenges, Methods, and Future Directions
    Ziyao Liu, Yu Jiang, Jiyuan Shen, Minyi Peng, Kwok-Yan Lam, Xingliang Yuan, Xiaoning Liu · MU · 31 Oct 2023

Verification of Neural Networks Local Differential Classification Privacy
    Roie Reshef, Anan Kabaha, Olga Seleznova, Dana Drachsler-Cohen · AAML · 31 Oct 2023

Generated Distributions Are All You Need for Membership Inference Attacks Against Generative Models
    Minxing Zhang, Ning Yu, Rui Wen, Michael Backes, Yang Zhang · DiffM · 30 Oct 2023

Privacy-Preserving Federated Learning over Vertically and Horizontally Partitioned Data for Financial Anomaly Detection
    S. Kadhe, Heiko Ludwig, Nathalie Baracaldo, Alan King, Yi Zhou, ..., Ryo Kawahara, Nir Drucker, Hayim Shaul, Eyal Kushnir, Omri Soceanu · FedML · 30 Oct 2023

Flow-based Distributionally Robust Optimization
    Chen Xu, Jonghyeok Lee, Xiuyuan Cheng, Yao Xie · OOD · 30 Oct 2023

Exploring Federated Unlearning: Review, Comparison, and Insights
    Yang Zhao, Jiaxi Yang, Yiling Tao, Lixu Wang, Xiaoxiao Li, Dusit Niyato, H. Vincent Poor · FedML, MU · 30 Oct 2023

RAIFLE: Reconstruction Attacks on Interaction-based Federated Learning with Adversarial Data Manipulation
    Dzung Pham, Shreyas Kulkarni, Amir Houmansadr · 29 Oct 2023

Where have you been? A Study of Privacy Risk for Point-of-Interest Recommendation
    Kunlin Cai, Jinghuai Zhang, Zhiqing Hong, Will Shand, Guang Wang, Desheng Zhang, Jianfeng Chi, Yuan Tian · 28 Oct 2023

Breaking the Trilemma of Privacy, Utility, Efficiency via Controllable Machine Unlearning
    Zheyuan Liu, Guangyao Dou, Yijun Tian, Chunhui Zhang, Eli Chien, Ziwei Zhu · MU · 28 Oct 2023

BlackJack: Secure machine learning on IoT devices through hardware-based shuffling
    Karthik Ganesan, Michal Fishkin, Ourong Lin, Natalie Enright Jerger · 26 Oct 2023

Proving Test Set Contamination in Black Box Language Models
    Yonatan Oren, Nicole Meister, Niladri Chatterji, Faisal Ladhak, Tatsunori B. Hashimoto · HILM · 26 Oct 2023

Detecting Pretraining Data from Large Language Models
    Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer · MIALM · 25 Oct 2023

SoK: Memorization in General-Purpose Large Language Models
    Valentin Hartmann, Anshuman Suri, Vincent Bindschaedler, David Evans, Shruti Tople, Robert West · KELM, LLMAG · 24 Oct 2023

FLTrojan: Privacy Leakage Attacks against Federated Language Models Through Selective Weight Tampering
    Md. Rafi Ur Rashid, Vishnu Asutosh Dasu, Kang Gu, Najrin Sultana, Shagufta Mehnaz · AAML, FedML · 24 Oct 2023

Quantum Federated Learning With Quantum Networks
    Tyler Wang, Huan-Hsin Tseng, Shinjae Yoo · 23 Oct 2023

Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models
    Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye · MIALM · 23 Oct 2023

Enhancing Accuracy-Privacy Trade-off in Differentially Private Split Learning
    Ngoc Duy Pham, K. Phan, Naveen Chilamkurti · 22 Oct 2023

MoPe: Model Perturbation-based Privacy Attacks on Language Models
    Marvin Li, Jason Wang, Jeffrey G. Wang, Seth Neel · AAML · 22 Oct 2023

Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks
    Ruixiang Tang, Gord Lueck, Rodolfo Quispe, Huseyin A. Inan, Janardhan Kulkarni, Xia Hu · 20 Oct 2023

Fundamental Limits of Membership Inference Attacks on Machine Learning Models
    Eric Aubinais, Elisabeth Gassiat, Pablo Piantanida · MIACV · 20 Oct 2023

SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models
    Boyang Zhang, Zheng Li, Ziqing Yang, Xinlei He, Michael Backes, Mario Fritz, Yang Zhang · 19 Oct 2023

Privacy Preserving Large Language Models: ChatGPT Case Study Based Vision and Framework
    Imdad Ullah, Najm Hassan, S. Gill, Basem Suleiman, T. Ahanger, Zawar Shah, Junaid Qadir, S. Kanhere · 19 Oct 2023

Black-Box Training Data Identification in GANs via Detector Networks
    Lukman Olagoke, Salil P. Vadhan, Seth Neel · 18 Oct 2023

Quantifying Privacy Risks of Prompts in Visual Prompt Learning
    Yixin Wu, Rui Wen, Michael Backes, Pascal Berrang, Mathias Humbert, Yun Shen, Yang Zhang · AAML, VPVLM · 18 Oct 2023

Last One Standing: A Comparative Analysis of Security and Privacy of Soft Prompt Tuning, LoRA, and In-Context Learning
    Rui Wen, Tianhao Wang, Michael Backes, Yang Zhang, Ahmed Salem · AAML · 17 Oct 2023

Passive Inference Attacks on Split Learning via Adversarial Regularization
    Xiaochen Zhu, Xinjian Luo, Yuncheng Wu, Yangfan Jiang, Xiaokui Xiao, Beng Chin Ooi · FedML · 16 Oct 2023

A Comprehensive Study of Privacy Risks in Curriculum Learning
    Joann Qiongna Chen, Xinlei He, Zheng Li, Yang Zhang, Zhou Li · 16 Oct 2023

DPZero: Private Fine-Tuning of Language Models without Backpropagation
    Liang Zhang, Bingcong Li, K. K. Thekumparampil, Sewoong Oh, Niao He · 14 Oct 2023

Large Language Model Unlearning
    Yuanshun Yao, Xiaojun Xu, Yang Liu · MU · 14 Oct 2023

User Inference Attacks on Large Language Models
    Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher A. Choquette-Choo, Zheng Xu · SILM, AAML · 13 Oct 2023

When Machine Learning Models Leak: An Exploration of Synthetic Training Data
    Manel Slokom, Peter-Paul de Wolf, Martha Larson · MIACV · 12 Oct 2023

Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities
    Subash Neupane, Shaswata Mitra, Ivan A. Fernandez, Swayamjit Saha, Sudip Mittal, Jingdao Chen, Nisha Pillai, Shahram Rahimi · 12 Oct 2023

Defending Our Privacy With Backdoors
    Dominik Hintersdorf, Lukas Struppek, Daniel Neider, Kristian Kersting · SILM, AAML · 12 Oct 2023

Why Train More? Effective and Efficient Membership Inference via Memorization
    Jihye Choi, Shruti Tople, Varun Chandrasekaran, Somesh Jha · TDI, FedML · 12 Oct 2023

In-Context Unlearning: Language Models as Few Shot Unlearners
    Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju · MU · 11 Oct 2023

Histopathological Image Classification and Vulnerability Analysis using Federated Learning
    Sankalp Vyas, Amar Nath Patra, R. Shukla · 11 Oct 2023

Improved Membership Inference Attacks Against Language Classification Models
    Shlomit Shachor, N. Razinkov, Abigail Goldsteen · 11 Oct 2023

No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML
    Ziqi Zhang, Chen Gong, Yifeng Cai, Yuanyuan Yuan, Bingyan Liu, Ding Li, Yao Guo, Xiangqun Chen · FedML · 11 Oct 2023