CAPE: Context-Aware Private Embeddings for Private Language Learning
Richard Plant, Dimitra Gkatzia, V. Giuffrida
27 August 2021 (arXiv:2108.12318)

Papers citing "CAPE: Context-Aware Private Embeddings for Private Language Learning" (20 papers shown)
• Anti-adversarial Learning: Desensitizing Prompts for Large Language Models. Xuan Li, Zhe Yin, Xiaodong Gu, Beijun Shen. 25 Apr 2025. (AAML, MU)
• Towards Harnessing the Collaborative Power of Large and Small Models for Domain Tasks. Yang Liu, Bingjie Yan, Tianyuan Zou, Jianqing Zhang, Zixuan Gu, ..., J. Li, Xiaozhou Ye, Ye Ouyang, Qiang Yang, Yuhang Zhang. 24 Apr 2025. (ALM)
• Investigating User Perspectives on Differentially Private Text Privatization. Stephen Meisenbacher, Alexandra Klymenko, Alexander Karpp, Florian Matthes. 12 Mar 2025.
• On the Vulnerability of Text Sanitization. Meng Tong, Kejiang Chen, Xiaojian Yuang, Xiaozhong Liu, Wenbo Zhang, Nenghai Yu, Jie Zhang. 22 Oct 2024.
• Private Language Models via Truncated Laplacian Mechanism. Tianhao Huang, Tao Yang, Ivan Habernal, Lijie Hu, Di Wang. 10 Oct 2024.
• Enhancing Text-to-SQL Capabilities of Large Language Models via Domain Database Knowledge Injection. Xingyu Ma, Xin Tian, Lingxiang Wu, Xuepeng Wang, Xueming Tang, Jinqiao Wang. 24 Sep 2024.
• Protecting Privacy in Classifiers by Token Manipulation. Re'em Harel, Yair Elboher, Yuval Pinter. 01 Jul 2024.
• The Fire Thief Is Also the Keeper: Balancing Usability and Privacy in Prompts. Zhili Shen, Zihang Xi, Ying He, Wei Tong, Jingyu Hua, Sheng Zhong. 20 Jun 2024. (SILM)
• Privacy Preserving Prompt Engineering: A Survey. Kennedy Edemacu, Xintao Wu. 09 Apr 2024.
• A Comparative Analysis of Word-Level Metric Differential Privacy: Benchmarking The Privacy-Utility Trade-off. Stephen Meisenbacher, Nihildev Nandakumar, Alexandra Klymenko, Florian Matthes. 04 Apr 2024.
• Privacy-Preserving Language Model Inference with Instance Obfuscation. Yixiang Yao, Fei Wang, Srivatsan Ravi, Muhao Chen. 13 Feb 2024.
• DEPN: Detecting and Editing Privacy Neurons in Pretrained Language Models. Xinwei Wu, Junzhuo Li, Minghui Xu, Weilong Dong, Shuangzhi Wu, Chao Bian, Deyi Xiong. 31 Oct 2023. (MU, KELM)
• PrIeD-KIE: Towards Privacy Preserved Document Key Information Extraction. S. Saifullah, S. Agne, Andreas Dengel, Sheraz Ahmed. 05 Oct 2023.
• Protecting User Privacy in Remote Conversational Systems: A Privacy-Preserving framework based on text sanitization. Zhigang Kan, Linbo Qiao, Hao Yu, Liwen Peng, Yifu Gao, Dongsheng Li. 14 Jun 2023.
• Privacy-Preserving Prompt Tuning for Large Language Model Services. Yansong Li, Zhixing Tan, Yang Liu. 10 May 2023. (SILM, VLM)
• Differentially Private Natural Language Models: Recent Advances and Future Directions. Lijie Hu, Ivan Habernal, Lei Shen, Di Wang. 22 Jan 2023. (AAML)
• Fair NLP Models with Differentially Private Text Encoders. Gaurav Maheshwari, Pascal Denis, Mikaela Keller, A. Bellet. 12 May 2022. (FedML, SILM)
• You Are What You Write: Preserving Privacy in the Era of Large Language Models. Richard Plant, V. Giuffrida, Dimitra Gkatzia. 20 Apr 2022. (PILM)
• Debiasing Pre-trained Contextualised Embeddings. Masahiro Kaneko, Danushka Bollegala. 23 Jan 2021.
• Extracting Training Data from Large Language Models. Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel. 14 Dec 2020. (MLAU, SILM)