arXiv: 2410.00382 (v2, latest)
Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning
1 October 2024
Authors: Shota Takashiro, Takeshi Kojima, Andrew Gambardella, Qi Cao, Yusuke Iwasawa, Y. Matsuo
Topics: CLL, MU, KELM
Links: ArXiv (abs), PDF, HTML
Papers citing "Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning" (1 of 1 papers shown)
SoK: Machine Unlearning for Large Language Models
Jie Ren, Yue Xing, Yingqian Cui, Charu C. Aggarwal, Hui Liu
Topics: MU
10 Jun 2025