Membership Inference Attack Susceptibility of Clinical Language Models
arXiv:2104.08305 · 16 April 2021
Abhyuday N. Jagannatha, Bhanu Pratap Singh Rawat, Hong Yu
MIACV
ArXiv (abs) · PDF · HTML

Papers citing "Membership Inference Attack Susceptibility of Clinical Language Models"

19 / 19 papers shown
SoK: The Privacy Paradox of Large Language Models: Advancements, Privacy Risks, and Mitigation
Yashothara Shanmugarasa, Ming Ding, M. Chamikara, Thierry Rakotoarivelo
PILM, AILaw · 15 Jun 2025

SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks
Kaiyuan Zhang, Siyuan Cheng, Hanxi Guo, Yuetian Chen, Zian Su, ..., Yuntao Du, Charles Fleming, Ashish Kundu, Xiangyu Zhang, Ninghui Li
AAML · 12 Jun 2025

Fragments to Facts: Partial-Information Fragment Inference from LLMs
Lucas Rosenblatt, Bin Han, Robert Wolfe, Bill Howe
AAML · 20 May 2025

Can Differentially Private Fine-tuning LLMs Protect Against Privacy Attacks?
Hao Du, Shang Liu, Yang Cao
AAML · 28 Apr 2025

Synthetic Data Privacy Metrics
Amy Steier, Lipika Ramaswamy, Andre Manoel, Alexa Haushalter
08 Jan 2025

Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and Future Directions
Hao Du, Shang Liu, Lele Zheng, Yang Cao, Atsuyoshi Nakamura, Lei Chen
AAML · 21 Dec 2024

Does Data Contamination Detection Work (Well) for LLMs? A Survey and Evaluation on Detection Assumptions
Yujuan Fu, Özlem Uzuner, Meliha Yetisgen, Fei Xia
24 Oct 2024

Detecting Training Data of Large Language Models via Expectation Maximization
Gyuwan Kim, Yang Li, Evangelia Spiliopoulou, Jie Ma, Miguel Ballesteros, William Yang Wang
MIALM · 10 Oct 2024

Undesirable Memorization in Large Language Models: A Survey
Ali Satvaty, Suzan Verberne, Fatih Turkmen
ELM, PILM · 03 Oct 2024

Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method
Weichao Zhang, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, Xueqi Cheng
23 Sep 2024

Detecting Pretraining Data from Large Language Models
Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer
MIALM · 25 Oct 2023

FLTrojan: Privacy Leakage Attacks against Federated Language Models Through Selective Weight Tampering
Md Rafi Ur Rashid, Vishnu Asutosh Dasu, Kang Gu, Najrin Sultana, Shagufta Mehnaz
AAML, FedML · 24 Oct 2023

Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models
Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye
MIALM · 23 Oct 2023

Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks
Ruixiang Tang, Gord Lueck, Rodolfo Quispe, Huseyin A. Inan, Janardhan Kulkarni, Helen Zhou
20 Oct 2023

Analyzing Leakage of Personally Identifiable Information in Language Models
Nils Lukas, A. Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, Santiago Zanella Béguelin
PILM · 01 Feb 2023

Swing Distillation: A Privacy-Preserving Knowledge Distillation Framework
Junzhuo Li, Xinwei Wu, Weilong Dong, Shuangzhi Wu, Chao Bian, Deyi Xiong
16 Dec 2022

Differentially Private Decoding in Large Language Models
Jimit Majmudar, Christophe Dupuy, Charith Peris, S. Smaili, Rahul Gupta, R. Zemel
26 May 2022

Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks
Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri
MIALM · 08 Mar 2022

Membership Inference Attacks on Machine Learning: A Survey
Hongsheng Hu, Z. Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang
MIACV · 14 Mar 2021