Lacuna Inc. at SemEval-2025 Task 4: LoRA-Enhanced Influence-Based Unlearning for LLMs

4 pages (main) + 2 pages bibliography, 1 figure, 2 tables
Abstract

This paper describes LIBU (LoRA-enhanced influence-based unlearning), an algorithm for the task of unlearning: removing specific knowledge from a large language model without retraining it from scratch and without compromising its overall utility (SemEval-2025 Task 4: Unlearning sensitive content from Large Language Models). The algorithm combines classical \textit{influence functions} to remove the influence of the targeted data from the model with \textit{second-order optimization} to stabilize overall utility. Our experiments show that this lightweight approach is readily applicable to unlearning LLMs across different kinds of tasks.
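The abstract does not spell out the update rule, but the classical influence-function machinery it references (Koh & Liang, 2017) can be sketched as follows; the notation here is ours and the paper's exact formulation may differ. For parameters $\hat\theta$ trained on $n$ examples with loss $\mathcal{L}$, the influence of a training point $z$ is
\[
\mathcal{I}(z) \;=\; -\,H_{\hat\theta}^{-1}\,\nabla_\theta \mathcal{L}(z,\hat\theta),
\qquad
H_{\hat\theta} \;=\; \frac{1}{n}\sum_{i=1}^{n}\nabla_\theta^2 \mathcal{L}(z_i,\hat\theta),
\]
so that removing a forget-set example $z_f$ corresponds to the approximate parameter update
\[
\hat\theta_{-z_f} \;\approx\; \hat\theta \;+\; \frac{1}{n}\,H_{\hat\theta}^{-1}\,\nabla_\theta \mathcal{L}(z_f,\hat\theta).
\]
In a LoRA setting, one would restrict this update to the low-rank adapter weights, which keeps the inverse-Hessian(-vector) computation tractable; how LIBU parameterizes and stabilizes this step is detailed in the paper itself.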

@article{kudelya2025_2506.04044,
  title={Lacuna Inc. at SemEval-2025 Task 4: LoRA-Enhanced Influence-Based Unlearning for LLMs},
  author={Aleksey Kudelya and Alexander Shirnin},
  journal={arXiv preprint arXiv:2506.04044},
  year={2025}
}