LoX: Low-Rank Extrapolation Robustifies LLM Safety Against Fine-tuning

18 June 2025
Gabriel J. Perin
Runjin Chen
Xuxi Chen
Nina S. T. Hirata
Zhangyang Wang
Junyuan Hong
Author Contacts: gabrieljp@usp.br, chenrunjin@utexas.edu, xxchen@utexas.edu, nina@ime.usp.br, atlaswang@utexas.edu, jyhong@utexas.edu
Main: 9 pages; Bibliography: 5 pages; Appendix: 6 pages; 10 figures; 9 tables
Abstract

Large Language Models (LLMs) have become indispensable in real-world applications. However, their widespread adoption raises significant safety concerns, particularly in responding to socially harmful questions. Despite substantial efforts to improve model safety through alignment, aligned models can still have their safety protections undermined by subsequent fine-tuning, even when the additional training data appears benign. In this paper, we empirically demonstrate that this vulnerability stems from the sensitivity of safety-critical low-rank subspaces in LLM parameters to fine-tuning. Building on this insight, we propose a novel training-free method, termed Low-Rank Extrapolation (LoX), to enhance safety robustness by extrapolating the safety subspace of an aligned LLM. Our experimental results confirm the effectiveness of LoX, demonstrating significant improvements in robustness against both benign and malicious fine-tuning attacks while preserving the model's adaptability to new tasks. For instance, LoX yields 11% to 54% absolute reductions in attack success rate (ASR) under benign or malicious fine-tuning attacks. By investigating the ASR landscape of the parameters, we attribute the success of LoX to the extrapolation moving the LLM parameters into a flatter zone, where they are less sensitive to perturbations. The code is available at this http URL.
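To make the abstract's mechanism concrete, below is a minimal sketch of what low-rank extrapolation of a safety update could look like. It assumes, as the abstract suggests but does not spell out, that the safety subspace is taken to be the top singular directions of the alignment weight update (aligned weights minus unaligned base weights), and that "extrapolation" means adding back an amplified copy of that low-rank component. The function name, the rank and alpha parameters, and the per-matrix application are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def lox_extrapolate(w_base: np.ndarray, w_aligned: np.ndarray,
                    rank: int = 8, alpha: float = 0.5) -> np.ndarray:
    """Hypothetical sketch of Low-Rank Extrapolation (LoX).

    Assumes the "safety subspace" is spanned by the top-`rank` singular
    directions of the alignment update (aligned minus base weights), and
    that extrapolation adds an extra `alpha`-scaled copy of that
    low-rank component to the aligned weights.
    """
    delta = w_aligned - w_base                         # alignment update
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    # Rank-k approximation of the update: its dominant (safety) subspace.
    delta_lowrank = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
    # Push the weights further along the safety directions.
    return w_aligned + alpha * delta_lowrank

# Toy usage on a single weight matrix; a real model would apply this
# per layer to each 2-D parameter matrix.
rng = np.random.default_rng(0)
w_base = rng.standard_normal((64, 64))
w_aligned = w_base + 0.1 * rng.standard_normal((64, 64))
w_robust = lox_extrapolate(w_base, w_aligned, rank=8, alpha=0.5)
print(w_robust.shape)  # (64, 64)

Under these assumptions, alpha = 0 recovers the aligned model unchanged, so the extrapolation strength can be tuned without any retraining, consistent with the method being training-free.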

@article{perin2025_2506.15606,
  title={LoX: Low-Rank Extrapolation Robustifies LLM Safety Against Fine-tuning},
  author={Gabriel J. Perin and Runjin Chen and Xuxi Chen and Nina S. T. Hirata and Zhangyang Wang and Junyuan Hong},
  journal={arXiv preprint arXiv:2506.15606},
  year={2025}
}