Align-then-Unlearn: Embedding Alignment for LLM Unlearning

16 June 2025
Philipp Spohn, Leander Girrbach, Jessica Bader, Zeynep Akata
Topic: MU (Machine Unlearning)
Main: 4 pages · 6 figures · 1 table · Bibliography: 2 pages · Appendix: 2 pages
Abstract

As large language models (LLMs) are trained on massive datasets, they have raised significant privacy and ethical concerns due to their potential to inadvertently retain sensitive information. Unlearning seeks to selectively remove specific data from trained models, such as personal information or copyrighted content. Current approaches targeting specific output sequences at the token level often fail to achieve complete forgetting and remain susceptible to prompt rephrasing. We propose Align-then-Unlearn, a novel framework that performs unlearning in the semantic embedding space rather than directly on output tokens. Align-then-Unlearn first augments the LLM with an embedding prediction module trained to anticipate future context representations. Unlearning is then achieved by fine-tuning the model to minimize the similarity between these predicted embeddings and a target embedding that represents the concept to be removed. Initial results show that Align-then-Unlearn effectively removes targeted knowledge with minimal degradation in overall model utility. These findings suggest that embedding-based unlearning offers a promising and robust approach to removing conceptual knowledge. Our code is available at this https URL.
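The abstract describes a two-stage procedure: first, an added prediction module is aligned with embeddings of the future context; then the LLM itself is fine-tuned so those predictions become dissimilar to the embedding of the concept to forget. The following is a minimal PyTorch-style sketch of the two objectives. The names (EmbeddingPredictor, alignment_loss, unlearning_loss) and the cosine-similarity losses are illustrative assumptions for this sketch, not the authors' released implementation, which is in the linked repository.

# Minimal sketch of the two stages described in the abstract (PyTorch).
# Module and loss names are assumptions made for illustration; they are
# not taken from the paper's code.
import torch
import torch.nn.functional as F
from torch import nn

class EmbeddingPredictor(nn.Module):
    """Maps an LLM hidden state to a predicted embedding of the future context."""
    def __init__(self, hidden_dim: int, embed_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.proj(hidden_states)

def alignment_loss(predicted: torch.Tensor, future_embedding: torch.Tensor) -> torch.Tensor:
    # Stage 1 (align): train the predictor so its output matches the
    # embedding of the text that actually follows each position.
    return 1.0 - F.cosine_similarity(predicted, future_embedding, dim=-1).mean()

def unlearning_loss(predicted: torch.Tensor, target_embedding: torch.Tensor) -> torch.Tensor:
    # Stage 2 (unlearn): fine-tune the LLM so its predicted embeddings
    # move away from the embedding of the concept to be removed.
    return F.cosine_similarity(predicted, target_embedding, dim=-1).mean()

In such a setup, the predictor would presumably consume hidden states from the LLM, and the target embedding would come from encoding a textual description of the concept to be removed; operating on embeddings rather than output tokens is what the abstract credits for robustness to prompt rephrasing.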

@article{spohn2025_2506.13181,
  title={Align-then-Unlearn: Embedding Alignment for LLM Unlearning},
  author={Philipp Spohn and Leander Girrbach and Jessica Bader and Zeynep Akata},
  journal={arXiv preprint arXiv:2506.13181},
  year={2025}
}