What Does it Mean for a Language Model to Preserve Privacy? (arXiv:2202.05520)
11 February 2022
Hannah Brown
Katherine Lee
Fatemehsadat Mireshghallah
Reza Shokri
Florian Tramèr
PILM
Papers citing "What Does it Mean for a Language Model to Preserve Privacy?"
50 / 151 papers shown
Can Language Models be Instructed to Protect Personal Information?
Yang Chen
Ethan Mendes
Sauvik Das
Wei-ping Xu
Alan Ritter
PILM
27
35
0
03 Oct 2023
Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks
Vaidehi Patil
Peter Hase
Joey Tianyi Zhou
KELM
AAML
31
100
0
29 Sep 2023
Recent Advances of Differential Privacy in Centralized Deep Learning: A Systematic Survey
Lea Demelius
Roman Kern
Andreas Trügler
SyDa
FedML
36
6
0
28 Sep 2023
Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey
Victoria Smith
Ali Shahin Shamsabadi
Carolyn Ashurst
Adrian Weller
PILM
32
24
0
27 Sep 2023
"It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents
Zhiping Zhang
Michelle Jia
Hao-Ping Lee
Bingsheng Yao
Sauvik Das
Ada Lerner
Dakuo Wang
Tianshi Li
SILM
ELM
24
70
0
20 Sep 2023
Towards Federated Foundation Models: Scalable Dataset Pipelines for Group-Structured Learning
Zachary B. Charles
Nicole Mitchell
Krishna Pillutla
Michael Reneer
Zachary Garrett
FedML
AI4CE
38
28
0
18 Jul 2023
A Comprehensive Overview of Large Language Models
Humza Naveed
Asad Ullah Khan
Shi Qiu
Muhammad Saqib
Saeed Anwar
Muhammad Usman
Naveed Akhtar
Nick Barnes
Ajmal Mian
OffRL
70
538
0
12 Jul 2023
Citation: A Key to Building Responsible and Accountable Large Language Models
Jie Huang
Kevin Chen-Chuan Chang
HILM
51
17
0
05 Jul 2023
Towards Regulatable AI Systems: Technical Gaps and Policy Opportunities
Xudong Shen
H. Brown
Jiashu Tao
Martin Strobel
Yao Tong
Akshay Narayan
Harold Soh
Finale Doshi-Velez
32
3
0
22 Jun 2023
Protecting User Privacy in Remote Conversational Systems: A Privacy-Preserving framework based on text sanitization
Zhigang Kan
Linbo Qiao
Hao Yu
Liwen Peng
Yifu Gao
Dongsheng Li
28
20
0
14 Jun 2023
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Irene Solaiman
Zeerak Talat
William Agnew
Lama Ahmad
Dylan K. Baker
...
Marie-Therese Png
Shubham Singh
A. Strait
Lukas Struppek
Arjun Subramonian
ELM
EGVM
41
104
0
09 Jun 2023
Privacy Aware Question-Answering System for Online Mental Health Risk Assessment
P. Chhikara
Ujjwal Pasupulety
J. Marshall
Dhiraj Chaurasia
Shwetanjali Kumari
AI4MH
11
3
0
09 Jun 2023
Training Data Extraction From Pre-trained Language Models: A Survey
Shotaro Ishihara
34
46
0
25 May 2023
Are Chatbots Ready for Privacy-Sensitive Applications? An Investigation into Input Regurgitation and Prompt-Induced Sanitization
Aman Priyanshu
Supriti Vijay
Ayush Kumar
Rakshit Naidu
Fatemehsadat Mireshghallah
SILM
38
24
0
24 May 2023
Quantifying Association Capabilities of Large Language Models and Its Implications on Privacy Leakage
Hanyin Shao
Jie Huang
Shen Zheng
Kevin Chen-Chuan Chang
PILM
22
25
0
22 May 2023
TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks
Shubhra (Santu) Karmaker
Dongji Feng
30
51
0
19 May 2023
Beyond the Safeguards: Exploring the Security Risks of ChatGPT
Erik Derner
Kristina Batistic
SILM
35
65
0
13 May 2023
Large Language Models in Sport Science & Medicine: Opportunities, Risks and Considerations
M. Connor
Michael O'Neill
LM&MA
14
6
0
05 May 2023
Mitigating Approximate Memorization in Language Models via Dissimilarity Learned Policy
Aly M. Kassem
26
2
0
02 May 2023
Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning
Casey Meehan
Florian Bordes
Pascal Vincent
Kamalika Chaudhuri
Chuan Guo
36
18
0
26 Apr 2023
Man vs the machine: The Struggle for Effective Text Anonymisation in the Age of Large Language Models
Constantinos Patsakis
Nikolaos Lykousas
25
9
0
22 Mar 2023
Language Model Behavior: A Comprehensive Survey
Tyler A. Chang
Benjamin Bergen
VLM
LRM
LM&MA
29
103
0
20 Mar 2023
Practical and Ethical Challenges of Large Language Models in Education: A Systematic Scoping Review
Lixiang Yan
Lele Sha
Linxuan Zhao
Yuheng Li
Roberto Martínez-Maldonado
Guanliang Chen
Xinyu Li
Yueqiao Jin
D. Gašević
SyDa
ELM
AI4Ed
59
270
0
17 Mar 2023
How to DP-fy ML: A Practical Guide to Machine Learning with Differential Privacy
Natalia Ponomareva
Hussein Hazimeh
Alexey Kurakin
Zheng Xu
Carson E. Denison
H. B. McMahan
Sergei Vassilvitskii
Steve Chien
Abhradeep Thakurta
99
167
0
01 Mar 2023
Analyzing Leakage of Personally Identifiable Information in Language Models
Nils Lukas
A. Salem
Robert Sim
Shruti Tople
Lukas Wutschitz
Santiago Zanella Béguelin
PILM
24
214
0
01 Feb 2023
Extracting Training Data from Diffusion Models
Nicholas Carlini
Jamie Hayes
Milad Nasr
Matthew Jagielski
Vikash Sehwag
Florian Tramèr
Borja Balle
Daphne Ippolito
Eric Wallace
DiffM
68
572
0
30 Jan 2023
Context-Aware Differential Privacy for Language Modeling
M. H. Dinh
Ferdinando Fioretto
30
2
0
28 Jan 2023
Differentially Private Natural Language Models: Recent Advances and Future Directions
Lijie Hu
Ivan Habernal
Lei Shen
Di Wang
AAML
35
18
0
22 Jan 2023
Tensions Between the Proxies of Human Values in AI
Teresa Datta
D. Nissani
Max Cembalest
Akash Khanna
Haley Massa
John P. Dickerson
34
2
0
14 Dec 2022
Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining
Florian Tramèr
Gautam Kamath
Nicholas Carlini
SILM
54
67
0
13 Dec 2022
Understanding How Model Size Affects Few-shot Instruction Prompting
Ayrton San Joaquin
Ardy Haroen
29
0
0
04 Dec 2022
Reranking Overgenerated Responses for End-to-End Task-Oriented Dialogue Systems
Songbo Hu
Ivan Vulić
Fangyu Liu
Anna Korhonen
46
0
0
07 Nov 2022
Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy
Daphne Ippolito
Florian Tramèr
Milad Nasr
Chiyuan Zhang
Matthew Jagielski
Katherine Lee
Christopher A. Choquette-Choo
Nicholas Carlini
PILM
MU
23
59
0
31 Oct 2022
Differentially Private Language Models for Secure Data Sharing
Justus Mattern
Zhijing Jin
Benjamin Weggenmann
Bernhard Schoelkopf
Mrinmaya Sachan
SyDa
19
47
0
25 Oct 2022
Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey
Sachin Kumar
Vidhisha Balachandran
Lucille Njoo
Antonios Anastasopoulos
Yulia Tsvetkov
ELM
79
85
0
14 Oct 2022
PQLM -- Multilingual Decentralized Portable Quantum Language Model for Privacy Protection
Shuyue Stella Li
Xiangyu Zhang
Shu Zhou
Hongchao Shu
Ruixing Liang
Hexin Liu
Leibny Paola García
45
23
0
06 Oct 2022
Knowledge Unlearning for Mitigating Privacy Risks in Language Models
Joel Jang
Dongkeun Yoon
Sohee Yang
Sungmin Cha
Moontae Lee
Lajanugen Logeswaran
Minjoon Seo
KELM
PILM
MU
149
195
0
04 Oct 2022
Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset
Peter Henderson
M. Krass
Lucia Zheng
Neel Guha
Christopher D. Manning
Dan Jurafsky
Daniel E. Ho
AILaw
ELM
133
97
0
01 Jul 2022
Democratizing Ethical Assessment of Natural Language Generation Models
A. Rasekh
Ian W. Eisenberg
ELM
33
1
0
30 Jun 2022
Are Large Pre-Trained Language Models Leaking Your Personal Information?
Jie Huang
Hanyin Shao
Kevin Chen-Chuan Chang
PILM
22
178
0
25 May 2022
Memorization in NLP Fine-tuning Methods
Fatemehsadat Mireshghallah
Archit Uniyal
Tianhao Wang
David Evans
Taylor Berg-Kirkpatrick
AAML
65
39
0
25 May 2022
Just Fine-tune Twice: Selective Differential Privacy for Large Language Models
Weiyan Shi
Ryan Shea
Si-An Chen
Chiyuan Zhang
R. Jia
Zhou Yu
AAML
34
38
0
15 Apr 2022
Mix and Match: Learning-free Controllable Text Generation using Energy Language Models
Fatemehsadat Mireshghallah
Kartik Goyal
Taylor Berg-Kirkpatrick
36
78
0
24 Mar 2022
Do Language Models Plagiarize?
Jooyoung Lee
Thai Le
Jinghui Chen
Dongwon Lee
36
74
0
15 Mar 2022
Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks
Fatemehsadat Mireshghallah
Kartik Goyal
Archit Uniyal
Taylor Berg-Kirkpatrick
Reza Shokri
MIALM
32
152
0
08 Mar 2022
Quantifying Memorization Across Neural Language Models
Nicholas Carlini
Daphne Ippolito
Matthew Jagielski
Katherine Lee
Florian Tramèr
Chiyuan Zhang
PILM
28
585
0
15 Feb 2022
Differentially Private Fine-tuning of Language Models
Da Yu
Saurabh Naik
A. Backurs
Sivakanth Gopi
Huseyin A. Inan
...
Y. Lee
Andre Manoel
Lukas Wutschitz
Sergey Yekhanin
Huishuai Zhang
134
350
0
13 Oct 2021
Deduplicating Training Data Makes Language Models Better
Katherine Lee
Daphne Ippolito
A. Nystrom
Chiyuan Zhang
Douglas Eck
Chris Callison-Burch
Nicholas Carlini
SyDa
242
595
0
14 Jul 2021
Extracting Training Data from Large Language Models
Nicholas Carlini
Florian Tramèr
Eric Wallace
Matthew Jagielski
Ariel Herbert-Voss
...
Tom B. Brown
D. Song
Ulfar Erlingsson
Alina Oprea
Colin Raffel
MLAU
SILM
290
1,831
0
14 Dec 2020
Calibration of Pre-trained Transformers
Shrey Desai
Greg Durrett
UQLM
246
290
0
17 Mar 2020