Membership Leakage in Label-Only Exposures

Zheng Li, Yang Zhang · 30 July 2020 · arXiv:2007.15528

Papers citing "Membership Leakage in Label-Only Exposures"

Showing 50 of 55 citing papers.

What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift
Jiamin Chang, Hao Li, Hammond Pearce, Ruoxi Sun, Bo-wen Li, Minhui Xue · 28 Apr 2025

Do Fairness Interventions Come at the Cost of Privacy: Evaluations for Binary Classifiers
Huan Tian, Guangsheng Zhang, Bo Liu, Tianqing Zhu, Ming Ding, Wanlei Zhou · 08 Mar 2025

Guarding the Privacy of Label-Only Access to Neural Network Classifiers via iDP Verification
Anan Kabaha, Dana Drachsler-Cohen · 23 Feb 2025 · AAML

CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling
Kaiyuan Zhang, Siyuan Cheng, Guangyu Shen, Bruno Ribeiro, Shengwei An, Pin-Yu Chen, Xinming Zhang, Ninghui Li · 28 Jan 2025

Rethinking Membership Inference Attacks Against Transfer Learning
Yanwei Yue, Jing Chen, Qianru Fang, Kun He, Ziming Zhao, Hao Ren, Guowen Xu, Yang Liu, Yang Xiang · 20 Jan 2025

GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models
Jiadong Lou, Xu Yuan, Rui Zhang, Xingliang Yuan, Neil Gong, N. Tzeng · 19 Jan 2025 · AAML

A Cost-Aware Approach to Adversarial Robustness in Neural Networks
Charles Meyers, Mohammad Reza Saleh Sedghpour, Tommy Löfstedt, Erik Elmroth · 11 Sep 2024 · OOD, AAML

Membership Inference Attack Against Masked Image Modeling
Zehan Li, Xinlei He, Ning Yu, Yang Zhang · 13 Aug 2024

Label Smoothing Improves Machine Unlearning
Zonglin Di, Zhaowei Zhu, Jinghan Jia, Jiancheng Liu, Zafar Takhirov, Bo Jiang, Yuanshun Yao, Sijia Liu, Yang Liu · 11 Jun 2024

A Survey of Graph Neural Networks in Real world: Imbalance, Noise, Privacy and OOD Challenges
Wei Ju, Siyu Yi, Yifan Wang, Zhiping Xiao, Zhengyan Mao, ..., Senzhang Wang, Xinwang Liu, Xiao Luo, Philip S. Yu, Ming Zhang · 07 Mar 2024 · AI4CE

All Rivers Run to the Sea: Private Learning with Asymmetric Flows
Yue Niu, Ramy E. Ali, Saurav Prakash, Salman Avestimehr · 05 Dec 2023 · FedML

SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models
Boyang Zhang, Zheng Li, Ziqing Yang, Xinlei He, Michael Backes, Mario Fritz, Yang Zhang · 19 Oct 2023

A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications
Yi Zhang, Yuying Zhao, Zhaoqing Li, Xueqi Cheng, Yu-Chiang Frank Wang, Olivera Kotevska, Philip S. Yu, Tyler Derr · 31 Aug 2023

Membership inference attack with relative decision boundary distance
Jiacheng Xu, Chengxiang Tan · 07 Jun 2023

Membership Inference Attack for Beluga Whales Discrimination
Voncarlos Marcelo Araújo, Sébastien Gambs, Clément Chion, Robert Michaud, L. Schneider, H. Lautraite · 28 Feb 2023

Prompt Stealing Attacks Against Text-to-Image Generation Models
Xinyue Shen, Y. Qu, Michael Backes, Yang Zhang · 20 Feb 2023

AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models
Abdullah Çaglar Öksüz, Anisa Halimi, Erman Ayday · 04 Feb 2023 · ELM, AAML

Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers
Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee · 27 Jan 2023 · MU

"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy · 29 Dec 2022 · AAML

SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning
A. Salem, Giovanni Cherubin, David E. Evans, Boris Köpf, Andrew J. Paverd, Anshuman Suri, Shruti Tople, Santiago Zanella Béguelin · 21 Dec 2022

Fine-Tuning Is All You Need to Mitigate Backdoor Attacks
Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang · 18 Dec 2022 · AAML

Purifier: Defending Data Inference Attacks via Transforming Confidence Scores
Ziqi Yang, Li-Juan Wang, D. Yang, Jie Wan, Ziming Zhao, E. Chang, Fan Zhang, Kui Ren · 01 Dec 2022 · AAML

On the Vulnerability of Data Points under Multiple Membership Inference Attacks and Target Models
Mauro Conti, Jiaxin Li, S. Picek · 28 Oct 2022 · MIALM

Data Poisoning Attacks Against Multimodal Encoders
Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, Yang Zhang · 30 Sep 2022 · AAML

Membership Inference Attacks and Generalization: A Causal Perspective
Teodora Baluta, Shiqi Shen, S. Hitarth, Shruti Tople, Prateek Saxena · 18 Sep 2022 · OOD, MIACV

Does CLIP Know My Face?
Dominik Hintersdorf, Lukas Struppek, Manuel Brack, Felix Friedrich, P. Schramowski, Kristian Kersting · 15 Sep 2022 · VLM

M^4I: Multi-modal Models Membership Inference
Pingyi Hu, Zihan Wang, Ruoxi Sun, Hu Wang, Minhui Xue · 15 Sep 2022

On the Privacy Risks of Cell-Based NAS Architectures
Haiping Huang, Zhikun Zhang, Yun Shen, Michael Backes, Qi Li, Yang Zhang · 04 Sep 2022

Data Provenance via Differential Auditing
Xin Mu, Ming Pang, Feida Zhu · 04 Sep 2022

Membership Inference Attacks by Exploiting Loss Trajectory
Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang · 31 Aug 2022

Data Isotopes for Data Provenance in DNNs
Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov · 29 Aug 2022

Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models
Xinlei He, Zheng Li, Weilin Xu, Cory Cornelius, Yang Zhang · 22 Aug 2022 · MIACV

Membership Inference Attacks via Adversarial Examples
Hamid Jalalzai, Elie Kadoche, Rémi Leluc, Vincent Plassier · 27 Jul 2022 · AAML, FedML, MIACV

NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks
Nuo Xu, Binghui Wang, Ran Ran, Wujie Wen, Parv Venkitasubramaniam · 11 Jun 2022 · AAML

Privacy for Free: How does Dataset Condensation Help Privacy?
Tian Dong, Bo-Lu Zhao, Lingjuan Lyu · 01 Jun 2022 · DD

Additive Logistic Mechanism for Privacy-Preserving Self-Supervised Learning
Yunhao Yang, Parham Gohari, Ufuk Topcu · 25 May 2022

Evaluating Membership Inference Through Adversarial Robustness
Zhaoxi Zhang, L. Zhang, Xufei Zheng, Bilal Hussain Abbasi, Shengshan Hu · 14 May 2022 · AAML

How to Combine Membership-Inference Attacks on Multiple Updated Models
Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan R. Ullman, Roxana Geambasu · 12 May 2022

Leveraging Adversarial Examples to Quantify Membership Information Leakage
Ganesh Del Grosso, Hamid Jalalzai, Georg Pichler, C. Palamidessi, Pablo Piantanida · 17 Mar 2022 · MIACV

One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy
Dayong Ye, Sheng Shen, Tianqing Zhu, B. Liu, Wanlei Zhou · 13 Mar 2022 · MIACV

MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members
Ismat Jarin, Birhanu Eshete · 02 Mar 2022

Membership Inference Attacks and Defenses in Neural Network Pruning
Xiaoyong Yuan, Lan Zhang · 07 Feb 2022 · AAML

SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders
Tianshuo Cong, Xinlei He, Yang Zhang · 27 Jan 2022

Model Stealing Attacks Against Inductive Graph Neural Networks
Yun Shen, Xinlei He, Yufei Han, Yang Zhang · 15 Dec 2021

Membership Inference Attacks From First Principles
Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr · 07 Dec 2021 · MIACV, MIALM

SHAPr: An Efficient and Versatile Membership Privacy Risk Metric for Machine Learning
Vasisht Duddu, S. Szyller, Nadarajah Asokan · 04 Dec 2021

Property Inference Attacks Against GANs
Junhao Zhou, Yufei Chen, Chao Shen, Yang Zhang · 15 Nov 2021 · AAML, MIACV

Generalization Techniques Empirically Outperform Differential Privacy against Membership Inference
Jiaxiang Liu, Simon Oya, Florian Kerschbaum · 11 Oct 2021 · MIACV

Membership Inference Attacks Against Recommender Systems
Minxing Zhang, Z. Ren, Zihan Wang, Pengjie Ren, Zhumin Chen, Pengfei Hu, Yang Zhang · 16 Sep 2021 · MIACV, AAML

EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong · 25 Aug 2021