Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning

As large language models (LLMs) are applied across diverse domains, the ability to selectively unlearn specific information is becoming increasingly essential. For instance, LLMs are expected to selectively provide confidential information to authorized internal users, such as employees or trusted partners, while withholding it from external users, including the general public and unauthorized entities. We therefore propose a novel method termed ``in-context knowledge unlearning'', which enables the model to selectively forget information at test time based on the query context. Our method fine-tunes pre-trained LLMs to enable prompt unlearning of target knowledge within the context while preserving unrelated information. Experiments on the TOFU, AGE, and RWKU datasets using Llama2-7B/13B and Mistral-7B models demonstrate that our method achieves up to 95% forget accuracy while retaining 80% of unrelated knowledge, significantly outperforming baselines in both in-domain and out-of-domain scenarios. Further investigation of the model's internal behavior revealed that fine-tuned LLMs generate correct predictions in the middle layers and maintain them up to the final layer, yet the decision to forget is made only at the last layer, i.e., ``LLMs pretend to forget''. Our findings offer valuable insights into improving the robustness of unlearning mechanisms in LLMs, laying a foundation for future research in the field.
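
To make the setup concrete, below is a minimal sketch of how fine-tuning examples for in-context knowledge unlearning might be constructed: each prompt names a forget target and poses a query, and the training target is a refusal when the query matches the forget target and the correct answer otherwise. The prompt template, refusal string, helper names, and example facts are illustrative assumptions, not the paper's exact format.

# A minimal sketch (Python) of constructing fine-tuning data for
# in-context knowledge unlearning. The template, refusal string, and
# facts below are illustrative assumptions, not the paper's exact setup.

from dataclasses import dataclass
from typing import List


@dataclass
class Fact:
    question: str
    answer: str


# Hypothetical prompt template: the context names the knowledge to
# forget, then poses a query. The model should refuse only when the
# query matches the forget target, and answer normally otherwise.
PROMPT_TEMPLATE = (
    "Forget everything about: {forget_topic}\n"
    "Question: {question}\n"
    "Answer:"
)
REFUSAL = "I'm sorry, I cannot answer that."


def build_examples(forget_fact: Fact, retain_facts: List[Fact]) -> List[dict]:
    """Pair one forget target with unrelated facts to create
    (prompt, target) fine-tuning examples."""
    examples = []

    # Forget case: the query hits the target knowledge -> refusal.
    examples.append({
        "prompt": PROMPT_TEMPLATE.format(
            forget_topic=forget_fact.question, question=forget_fact.question
        ),
        "target": REFUSAL,
    })

    # Retain cases: unrelated queries -> the correct answer.
    for fact in retain_facts:
        examples.append({
            "prompt": PROMPT_TEMPLATE.format(
                forget_topic=forget_fact.question, question=fact.question
            ),
            "target": fact.answer,
        })
    return examples


if __name__ == "__main__":
    forget = Fact("What is the internal project codename?", "Project Falcon")
    retain = [
        Fact("What is the capital of France?", "Paris"),
        Fact("Who wrote 'Hamlet'?", "William Shakespeare"),
    ]
    for ex in build_examples(forget, retain):
        print(ex["prompt"], "->", ex["target"])

Measuring forget accuracy on the forget cases and answer accuracy on the retain cases of such examples would mirror the evaluation described in the abstract.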
@article{takashiro2025_2410.00382,
  title   = {Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning},
  author  = {Shota Takashiro and Takeshi Kojima and Andrew Gambardella and Qi Cao and Yusuke Iwasawa and Yutaka Matsuo},
  journal = {arXiv preprint arXiv:2410.00382},
  year    = {2025}
}