Canary Extraction in Natural Language Understanding Models
arXiv:2203.13920 · 25 March 2022
Rahil Parikh, Christophe Dupuy, Rahul Gupta
Papers citing "Canary Extraction in Natural Language Understanding Models" (16 of 16 papers shown):
1. NLP Security and Ethics, in the Wild
   Heather Lent, Erick Galinkin, Yiyi Chen, Jens Myrup Pedersen, Leon Derczynski, Johannes Bjerva
   SILM · 47 / 0 / 0 · 09 Apr 2025

2. RAG-Thief: Scalable Extraction of Private Data from Retrieval-Augmented Generation Applications with Agent-based Attacks
   Changyue Jiang, Xudong Pan, Geng Hong, Chenfu Bao, Min Yang
   SILM · 77 / 10 / 0 · 21 Nov 2024

3. Decoding Secret Memorization in Code LLMs Through Token-Level Characterization
   Yuqing Nie, Chong Wang, Kaidi Wang, Guoai Xu, Guosheng Xu, Haoyu Wang
   OffRL · 211 / 1 / 0 · 11 Oct 2024

4. MIBench: A Comprehensive Framework for Benchmarking Model Inversion Attack and Defense
   Yixiang Qiu, Hongyao Yu, Hao Fang, Wenbo Yu, Bin Chen, Shu-Tao Xia, Ke Xu
   AAML · 43 / 1 / 0 · 07 Oct 2024

5. Defining 'Good': Evaluation Framework for Synthetic Smart Meter Data
   Sheng Chai, Gus Chadney, Charlot Avery, Phil Grunewald, Pascal Van Hentenryck, P. Donti
   33 / 6 / 0 · 16 Jul 2024

6. Unique Security and Privacy Threats of Large Language Model: A Comprehensive Survey
   Shang Wang, Tianqing Zhu, Bo Liu, Ming Ding, Xu Guo, Dayong Ye, Wanlei Zhou, Philip S. Yu
   PILM · 71 / 17 / 0 · 12 Jun 2024

7. Reconstructing training data from document understanding models
   Jérémie Dentan, Arnaud Paran, A. Shabou
   AAML, SyDa · 54 / 1 / 0 · 05 Jun 2024

8. Privacy-preserving Fine-tuning of Large Language Models through Flatness
   Tiejin Chen, Longchao Da, Huixue Zhou, Pingzhi Li, Kaixiong Zhou, Tianlong Chen, Hua Wei
   29 / 5 / 0 · 07 Mar 2024

9. Text Embedding Inversion Security for Multilingual Language Models
   Yiyi Chen, Heather Lent, Johannes Bjerva
   27 / 14 / 0 · 22 Jan 2024

10. A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly
    Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Eric Sun, Yue Zhang
    PILM, ELM · 54 / 476 / 0 · 04 Dec 2023

11. Privacy in Large Language Models: Attacks, Defenses and Future Directions
    Haoran Li, Yulin Chen, Jinglong Luo, Yan Kang, Xiaojin Zhang, Qi Hu, Chunkit Chan, Yangqiu Song
    PILM · 50 / 42 / 0 · 16 Oct 2023

12. Balancing Transparency and Risk: The Security and Privacy Risks of Open-Source Machine Learning Models
    Dominik Hintersdorf, Lukas Struppek, Kristian Kersting
    SILM · 33 / 4 / 0 · 18 Aug 2023

13. Training Data Extraction From Pre-trained Language Models: A Survey
    Shotaro Ishihara
    37 / 46 / 0 · 25 May 2023

14. Analyzing Leakage of Personally Identifiable Information in Language Models
    Nils Lukas, A. Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, Santiago Zanella Béguelin
    PILM · 24 / 214 / 0 · 01 Feb 2023

15. CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning
    Samuel Maddock, Alexandre Sablayrolles, Pierre Stock
    FedML · 22 / 22 / 0 · 06 Oct 2022

16. Extracting Training Data from Large Language Models
    Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
    MLAU, SILM · 290 / 1,831 / 0 · 14 Dec 2020