ResearchTrend.AI
Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models

Olivia Ma, Jonathan Passerat-Palmbach, Dmitrii Usynin
arXiv:2411.15831, 24 November 2024

Papers citing "Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models"

21 papers shown

1. Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning
   Lynn Chua, Badih Ghazi, Yangsibo Huang, Pritish Kamath, Ravi Kumar, Daogao Liu, Pasin Manurangsi, Amer Sinha, Chiyuan Zhang (20 Jun 2024)

2. Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
   Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang (21 Mar 2024)

3. PEFT-SER: On the Use of Parameter Efficient Transfer Learning Approaches For Speech Emotion Recognition Using Pre-trained Speech Models
   Tiantian Feng, Shrikanth Narayanan (08 Jun 2023)

4. Membership Inference Attacks against Language Models via Neighbourhood Comparison
   Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick (29 May 2023)

5. Amplifying Membership Exposure via Data Poisoning
   Yufei Chen, Chao Shen, Yun Shen, Cong Wang, Yang Zhang (01 Nov 2022)

6. AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning
   Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao (31 Oct 2022)

7. Reconstructing Training Data from Trained Neural Networks
   Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani (15 Jun 2022)

8. Emergent Abilities of Large Language Models
   Jason W. Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, ..., Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, J. Dean, W. Fedus (15 Jun 2022)

9. Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks
   Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri (08 Mar 2022)

10. Differentially Private Fine-tuning of Language Models
    Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang (13 Oct 2021)

11. LoRA: Low-Rank Adaptation of Large Language Models
    J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen (17 Jun 2021)

12. Extracting Training Data from Large Language Models
    Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, Basel Alomair, Ulfar Erlingsson, Alina Oprea, Colin Raffel (14 Dec 2020)

13. ML Privacy Meter: Aiding Regulatory Compliance by Quantifying the Privacy Risks of Machine Learning
    S. K. Murakonda, Reza Shokri (18 Jul 2020)

14. Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability
    Stacey Truex, Ling Liu, Mehmet Emre Gursoy, Wenqi Wei, Lei Yu (21 Nov 2019)

15. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
    Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf (02 Oct 2019)

16. Differential Privacy Has Disparate Impact on Model Accuracy
    Eugene Bagdasaryan, Vitaly Shmatikov (28 May 2019)

17. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova (11 Oct 2018)

18. The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
    Nicholas Carlini, Chang-rui Liu, Ulfar Erlingsson, Jernej Kos, Basel Alomair (22 Feb 2018)

19. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
    Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, Basel Alomair (15 Dec 2017)

20. Membership Inference Attacks against Machine Learning Models
    Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov (18 Oct 2016)

21. Deep Learning with Differential Privacy
    Martín Abadi, Andy Chu, Ian Goodfellow, H. B. McMahan, Ilya Mironov, Kunal Talwar, Li Zhang (01 Jul 2016)