ResearchTrend.AI
arXiv:2410.00382 · Cited By
Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning

1 October 2024
Shota Takashiro
Takeshi Kojima
Andrew Gambardella
Qi Cao
Yusuke Iwasawa
Y. Matsuo

Papers citing "Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning"

1 of 1 papers shown
SoK: Machine Unlearning for Large Language Models
Jie Ren, Yue Xing, Yingqian Cui, Charu C. Aggarwal, Hui Liu
10 Jun 2025