Cited By

Label-Only Membership Inference Attacks
Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot
arXiv:2007.14321, 28 July 2020. Tags: MIACV, MIALM.
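For context, the cited paper shows that membership inference remains possible when the target model returns only hard labels, for example by measuring how robust a prediction is to small input perturbations. The sketch below illustrates that general idea only; the `query_label` helper, the Keras-style `model.predict` call, the Gaussian perturbations, and the fixed threshold are illustrative assumptions, not the authors' exact attack or hyperparameters.

```python
# Minimal sketch of a label-only membership inference signal.
# Assumption: the target classifier exposes only hard labels (no confidences),
# here wrapped by a hypothetical Keras-style `model.predict`.
import numpy as np


def query_label(model, x):
    """Hypothetical label-only API: returns the argmax class for one input."""
    return int(np.argmax(model.predict(x[None, ...])[0]))


def label_robustness(model, x, y_true, n_queries=50, sigma=0.05, rng=None):
    """Fraction of small Gaussian perturbations whose predicted label still
    matches y_true. Training members tend to sit farther from the decision
    boundary, so this fraction is typically higher for them."""
    rng = rng or np.random.default_rng(0)
    hits = 0
    for _ in range(n_queries):
        x_pert = x + rng.normal(0.0, sigma, size=x.shape)
        if query_label(model, x_pert) == y_true:
            hits += 1
    return hits / n_queries


def infer_membership(model, x, y_true, threshold=0.9):
    """Predict 'member' when label robustness exceeds a threshold; in practice
    the threshold would be calibrated, e.g., on shadow models."""
    return label_robustness(model, x, y_true) >= threshold
```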
Papers citing "Label-Only Membership Inference Attacks" (50 of 115 papers shown):
Nosy Layers, Noisy Fixes: Tackling DRAs in Federated Learning Systems using Explainable AI [AAML]
Meghali Nandi, Arash Shaghaghi, Nazatul Haque Sultan, Gustavo Batista, Raymond K. Zhao, Sanjay Jha. 16 May 2025.

Cutting Through Privacy: A Hyperplane-Based Data Reconstruction Attack in Federated Learning
Francesco Diana, André Nusser, Chuan Xu, Giovanni Neglia. 15 May 2025.

On the Account Security Risks Posed by Password Strength Meters
Ming Xu, Weili Han, Jitao Yu, Jing Liu, Xinsong Zhang, Yun Lin, Jin Song Dong. 13 May 2025.

Threat Modeling for AI: The Case for an Asset-Centric Approach
Jose Sanchez Vicarte, Marcin Spoczynski, Mostafa Elsaid. 08 May 2025.

Privacy Risks and Preservation Methods in Explainable Artificial Intelligence: A Scoping Review
Sonal Allana, Mohan Kankanhalli, Rozita Dara. 05 May 2025.

DeSIA: Attribute Inference Attacks Against Limited Fixed Aggregate Statistics
Yifeng Mao, Bozhidar Stevanoski, Yves-Alexandre de Montjoye. 25 Apr 2025.

DC-SGD: Differentially Private SGD with Dynamic Clipping through Gradient Norm Distribution Estimation
Chengkun Wei, Weixian Li, Chen Gong, Wenzhi Chen. 29 Mar 2025.

Do Fairness Interventions Come at the Cost of Privacy: Evaluations for Binary Classifiers
Huan Tian, Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei Zhou. 08 Mar 2025.
Towards Label-Only Membership Inference Attack against Pre-trained Large Language Models [MIALM]
Yu He, Boheng Li, L. Liu, Zhongjie Ba, Wei Dong, Yiming Li, Zhanyue Qin, Kui Ren, Chong Chen. 26 Feb 2025.

Privacy Ripple Effects from Adding or Removing Personal Information in Language Model Training [PILM]
Jaydeep Borkar, Matthew Jagielski, Katherine Lee, Niloofar Mireshghallah, David A. Smith, Christopher A. Choquette-Choo. 24 Feb 2025.

The Canary's Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text
Matthieu Meeus, Lukas Wutschitz, Santiago Zanella Béguelin, Shruti Tople, Reza Shokri. 24 Feb 2025.

Guarding the Privacy of Label-Only Access to Neural Network Classifiers via iDP Verification [AAML]
Anan Kabaha, Dana Drachsler-Cohen. 23 Feb 2025.

Rethinking Membership Inference Attacks Against Transfer Learning
Yanwei Yue, Jing Chen, Qianru Fang, Kun He, Ziming Zhao, Hao Ren, Guowen Xu, Yang Liu, Yang Xiang. 20 Jan 2025.

GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models [AAML]
Jiadong Lou, Xu Yuan, Rui Zhang, Xingliang Yuan, Neil Gong, N. Tzeng. 19 Jan 2025.

Understanding and Mitigating Membership Inference Risks of Neural Ordinary Differential Equations
Sanghyun Hong, Fan Wu, A. Gruber, Kookjin Lee. 12 Jan 2025.

Synthetic Data Privacy Metrics
Amy Steier, Lipika Ramaswamy, Andre Manoel, Alexa Haushalter. 08 Jan 2025.
Gradients Stand-in for Defending Deep Leakage in Federated Learning [FedML]
H. Yi, H. Ren, C. Hu, Y. Li, J. Deng, Xin Xie. 11 Oct 2024.

Ward: Provable RAG Dataset Inference via LLM Watermarks
Nikola Jovanović, Robin Staab, Maximilian Baader, Martin Vechev. 04 Oct 2024.

A Cost-Aware Approach to Adversarial Robustness in Neural Networks [OOD, AAML]
Charles Meyers, Mohammad Reza Saleh Sedghpour, Tommy Löfstedt, Erik Elmroth. 11 Sep 2024.

Membership Inference Attack Against Masked Image Modeling
Zehan Li, Xinlei He, Ning Yu, Yang Zhang. 13 Aug 2024.

Range Membership Inference Attacks
Jiashu Tao, Reza Shokri. 09 Aug 2024.

Differentially Private Block-wise Gradient Shuffle for Deep Learning [FedML]
Zilong Zhang. 31 Jul 2024.

Fingerprint Membership and Identity Inference Against Generative Adversarial Networks [AAML]
Saverio Cavasin, Daniele Mari, Simone Milani, Mauro Conti. 21 Jun 2024.

Label Smoothing Improves Machine Unlearning
Zonglin Di, Zhaowei Zhu, Jinghan Jia, Jiancheng Liu, Zafar Takhirov, Bo Jiang, Yuanshun Yao, Sijia Liu, Yang Liu. 11 Jun 2024.

OSLO: One-Shot Label-Only Membership Inference Attacks
Yuefeng Peng, Jaechul Roh, Subhransu Maji, Amir Houmansadr. 27 May 2024.
Efficient Knowledge Deletion from Trained Models through Layer-wise Partial Machine Unlearning [MU]
Vinay Chakravarthi Gogineni, E. Nadimi. 12 Mar 2024.

SoK: Unintended Interactions among Machine Learning Defenses and Risks [AAML]
Vasisht Duddu, S. Szyller, Nadarajah Asokan. 07 Dec 2023.

All Rivers Run to the Sea: Private Learning with Asymmetric Flows [FedML]
Yue Niu, Ramy E. Ali, Saurav Prakash, Salman Avestimehr. 05 Dec 2023.

Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems [AAML, SILM]
Guangjing Wang, Ce Zhou, Yuanda Wang, Bocheng Chen, Hanqing Guo, Qiben Yan. 20 Nov 2023.

Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks
Ruixiang Tang, Gord Lueck, Rodolfo Quispe, Huseyin A. Inan, Janardhan Kulkarni, Xia Hu. 20 Oct 2023.

SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models
Boyang Zhang, Zheng Li, Ziqing Yang, Xinlei He, Michael Backes, Mario Fritz, Yang Zhang. 19 Oct 2023.

Defending Our Privacy With Backdoors [SILM, AAML]
Dominik Hintersdorf, Lukas Struppek, Daniel Neider, Kristian Kersting. 12 Oct 2023.

White-box Membership Inference Attacks against Diffusion Models [AAML, DiffM]
Yan Pang, Tianhao Wang, Xu Kang, Mengdi Huai, Yang Zhang. 11 Aug 2023.
Latent Code Augmentation Based on Stable Diffusion for Data-free Substitute Attacks [DiffM]
Mingwen Shao, Lingzhuang Meng, Yuanjian Qiao, Lixu Zhang, W. Zuo. 24 Jul 2023.

FFPDG: Fast, Fair and Private Data Generation
Weijie Xu, Jinjin Zhao, Francis Iannacci, Bo Wang. 30 Jun 2023.

Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models [SILM]
Adel M. Elmahdy, A. Salem. 23 Jun 2023.

Membership inference attack with relative decision boundary distance
Jiacheng Xu, Chengxiang Tan. 07 Jun 2023.

Training Data Extraction From Pre-trained Language Models: A Survey
Shotaro Ishihara. 25 May 2023.

Differentially Private Synthetic Data via Foundation Model APIs 1: Images
Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Harsha Nori, Sergey Yekhanin. 24 May 2023.

Selective Pre-training for Private Fine-tuning
Da Yu, Sivakanth Gopi, Janardhan Kulkarni, Zinan Lin, Saurabh Naik, Tomasz Religa, Jian Yin, Huishuai Zhang. 23 May 2023.

Gradient Leakage Defense with Key-Lock Module for Federated Learning [FedML]
Hanchi Ren, Jingjing Deng, Xianghua Xie, Xiaoke Ma, Jianfeng Ma. 06 May 2023.
A Survey on Secure and Private Federated Learning Using Blockchain: Theory and Application in Resource-constrained Computing
Ervin Moore, Ahmed Imteaj, S. Rezapour, M. Amini. 24 Mar 2023.

Students Parrot Their Teachers: Membership Inference on Model Distillation [FedML]
Matthew Jagielski, Milad Nasr, Christopher A. Choquette-Choo, Katherine Lee, Nicholas Carlini. 06 Mar 2023.

Membership Inference Attack for Beluga Whales Discrimination
Voncarlos Marcelo Araújo, Sébastien Gambs, Clément Chion, Robert Michaud, L. Schneider, H. Lautraite. 28 Feb 2023.

Prompt Stealing Attacks Against Text-to-Image Generation Models
Xinyue Shen, Y. Qu, Michael Backes, Yang Zhang. 20 Feb 2023.

AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models [ELM, AAML]
Abdullah Çaglar Öksüz, Anisa Halimi, Erman Ayday. 04 Feb 2023.

Are Diffusion Models Vulnerable to Membership Inference Attacks?
Jinhao Duan, Fei Kong, Shiqi Wang, Xiaoshuang Shi, Kaidi Xu. 02 Feb 2023.

Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers [MU]
Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee. 27 Jan 2023.

Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy
Yusuke Kawamoto, Kazumasa Miyake, K. Konishi, Y. Oiwa. 18 Jan 2023.

GAN-based Domain Inference Attack
Yuechun Gu, Keke Chen. 22 Dec 2022.