arXiv: 2101.00036
KART: Parameterization of Privacy Leakage Scenarios from Pre-trained Language Models
31 December 2020
Yuta Nakamura, S. Hanaoka, Y. Nomura, Naoto Hayashi, O. Abe, Shuntaro Yada, Shoko Wakamiya
Nara Institute of Science
MIACV
Papers citing "KART: Parameterization of Privacy Leakage Scenarios from Pre-trained Language Models" (4 / 4 papers shown)
A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics
Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Min Zhang
LM&MA, AILaw · 98 · 154 · 0 · 28 Jan 2025

Training Data Extraction From Pre-trained Language Models: A Survey
Shotaro Ishihara
32 · 46 · 0 · 25 May 2023

Memorization in NLP Fine-tuning Methods
Fatemehsadat Mireshghallah, Archit Uniyal, Tianhao Wang, David Evans, Taylor Berg-Kirkpatrick
AAML · 65 · 39 · 0 · 25 May 2022

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM · 290 · 1,824 · 0 · 14 Dec 2020