Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and Future Directions (arXiv:2412.16504)
  Hao Du, Shang Liu, Lele Zheng, Yang Cao, Atsuyoshi Nakamura, Lei Chen
  AAML · 21 December 2024

Papers citing "Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and Future Directions"

36 papers shown. Each entry lists the title, authors, topic tags (where assigned), and date.
The Future of Continual Learning in the Era of Foundation Models: Three Key Directions
  Jack Bell, Luigi Quarantiello, Eric Nuertey Coleman, Lanpei Li, Malio Li, Mauro Madeddu, Elia Piccoli, Vincenzo Lomonaco
  KELM · 03 Jun 2025
SALAD: Systematic Assessment of Machine Unlearning on LLM-Aided Hardware Design
  Zeng Wang, Minghao Shao, Rupesh Karn, Jitendra Bhandari, Likhitha Mankali, Ramesh Karri, Ozgur Sinanoglu, Muhammad Shafique, J. Knechtel
  02 Jun 2025
What Large Language Models Do Not Talk About: An Empirical Study of Moderation and Censorship Practices
  Sander Noels, Guillaume Bied, Maarten Buyl, Alexander Rogiers, Yousra Fettach, Jefrey Lijffijt, Tijl De Bie
  04 Apr 2025
VeriLeaky: Navigating IP Protection vs Utility in Fine-Tuning for LLM-Driven Verilog Coding
  Zeng Wang, Minghao Shao, M. Nabeel, P. Roy, Likhitha Mankali, Jitendra Bhandari, Ramesh Karri, Ozgur Sinanoglu, Muhammad Shafique, J. Knechtel
  17 Mar 2025
Harmful Fine-tuning Attacks and Defenses for Large Language Models: A Survey
  Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Ling Liu
  AAML · 26 Sep 2024
Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models
  Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, Nicholas Carlini
  SILM, AAML · 01 Apr 2024
Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
  Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang
  21 Mar 2024
PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps
  Ruixuan Liu, Tianhao Wang, Yang Cao, Li Xiong
  AAML, SILM · 14 Mar 2024
On Protecting the Data Privacy of Large Language Models (LLMs): A Survey
  Biwei Yan, Kun Li, Minghui Xu, Yueyan Dong, Yue Zhang, Zhaochun Ren, Xiuzhen Cheng
  AILaw, PILM · 08 Mar 2024
Unlearn What You Want to Forget: Efficient Unlearning for LLMs
  Jiaao Chen, Diyi Yang
  MU · 31 Oct 2023
InferDPT: Privacy-Preserving Inference for Black-box Large Language Model
  Meng Tong, Kejiang Chen, Jie Zhang, Yuang Qi, Weiming Zhang, Neng H. Yu, Tianwei Zhang, Zhikun Zhang
  SILM · 18 Oct 2023
Privacy in Large Language Models: Attacks, Defenses and Future Directions
  Haoran Li, Yulin Chen, Jinglong Luo, Yan Kang, Xiaojin Zhang, Qi Hu, Chunkit Chan, Yangqiu Song
  PILM · 16 Oct 2023
Who's Harry Potter? Approximate Unlearning in LLMs
  Ronen Eldan, M. Russinovich
  MU, MoMe · 03 Oct 2023
Hide and Seek (HaS): A Lightweight Framework for Prompt Privacy Protection
  Yu Chen, Tingxin Li, Huiming Liu, Yang Yu
  06 Sep 2023
Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection
  Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang, Vijay Srinivasan, Xiang Ren, Hongxia Jin
  SILM · 31 Jul 2023
Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning
  Mustafa Safa Ozdayi, Charith Peris, Jack G. M. FitzGerald, Christophe Dupuy, Jimit Majmudar, Haidar Khan, Rahil Parikh, Rahul Gupta
  19 May 2023
Privacy-Preserving Prompt Tuning for Large Language Model Services
  Yansong Li, Zhixing Tan, Yang Liu
  SILM, VLM · 10 May 2023
Towards Building the Federated GPT: Federated Instruction Tuning
  Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Tong Yu, Yufan Zhou, Guoyin Wang, Yiran Chen
  ALM, FedML · 09 May 2023
DP-BART for Privatized Text Rewriting under Local Differential Privacy
  Timour Igamberdiev, Ivan Habernal
  15 Feb 2023
Offsite-Tuning: Transfer Learning without Full Model
  Guangxuan Xiao, Ji Lin, Song Han
  09 Feb 2023
Analyzing Leakage of Personally Identifiable Information in Language Models
  Nils Lukas, A. Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, Santiago Zanella Béguelin
  PILM · 01 Feb 2023
Privately Fine-Tuning Large Language Models with Differential Privacy
  R. Behnia, Mohammadreza Ebrahimi, Jason L. Pacheco, B. Padmanabhan
  26 Oct 2022
Knowledge Unlearning for Mitigating Privacy Risks in Language Models
  Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo
  KELM, PILM, MU · 04 Oct 2022
Differentially Private Optimization on Large Model at Small Cost
  Zhiqi Bu, Yu Wang, Sheng Zha, George Karypis
  30 Sep 2022
Recovering Private Text in Federated Learning of Language Models
  Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, Danqi Chen
  FedML · 17 May 2022
Locating and Editing Factual Associations in GPT
  Kevin Meng, David Bau, A. Andonian, Yonatan Belinkov
  KELM · 10 Feb 2022
Differentially Private Fine-tuning of Language Models
  Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang
  13 Oct 2021
BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models
  Elad Ben-Zaken, Shauli Ravfogel, Yoav Goldberg
  18 Jun 2021
Differential Privacy for Text Analytics via Natural Text Sanitization
  Xiang Yue, Minxin Du, Tianhao Wang, Yaliang Li, Huan Sun, Sherman S. M. Chow
  02 Jun 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
  Brian Lester, Rami Al-Rfou, Noah Constant
  VPVLM · 18 Apr 2021
Membership Inference Attack Susceptibility of Clinical Language Models
  Abhyuday N. Jagannatha, Bhanu Pratap Singh Rawat, Hong-ye Yu
  MIACV · 16 Apr 2021
Prefix-Tuning: Optimizing Continuous Prompts for Generation
  Xiang Lisa Li, Percy Liang
  01 Jan 2021
Analyzing Information Leakage of Updates to Natural Language Models
  Santiago Zanella Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, O. Ohrimenko, Boris Köpf, Marc Brockschmidt
  ELM, MIACV, FedML, PILM, KELM · 17 Dec 2019
Leveraging Hierarchical Representations for Preserving Privacy and Utility in Text
  Oluwaseyi Feyisetan, Tom Diethe, Thomas Drake
  20 Oct 2019
Parameter-Efficient Transfer Learning for NLP
  N. Houlsby, A. Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly
  02 Feb 2019
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
  VLM, SSL, SSeg · 11 Oct 2018