
Towards Secure Tuning: Mitigating Security Risks Arising from Benign Instruction Fine-Tuning

Abstract

Instruction Fine-Tuning (IFT) has become an essential method for adapting base Large Language Models (LLMs) into variants for professional and private use. However, researchers have raised concerns over a significant decrease in LLMs' security following IFT, even when the IFT process involves entirely benign instructions (termed Benign IFT). Our study represents a pioneering effort to mitigate the security risks arising from Benign IFT. Specifically, we conduct a Module Robustness Analysis to investigate how LLMs' internal modules contribute to their security. Based on this analysis, we propose a novel IFT strategy, called the Modular Layer-wise Learning Rate (ML-LR) strategy. In our analysis, we implement a simple security feature classifier that serves as a proxy to measure the robustness of modules (e.g., Q/K/V). Our findings reveal that module robustness shows clear patterns, varying regularly with module type and layer depth. Leveraging these insights, we develop a proxy-guided search algorithm to identify a robust subset of modules, termed Mods_{Robust}. During IFT, the ML-LR strategy applies differentiated learning rates to Mods_{Robust} and the remaining modules. Our experimental results show that in security assessments, applying our ML-LR strategy significantly mitigates the rise in harmfulness of LLMs following Benign IFT. Notably, our ML-LR strategy has little impact on the usability or expertise of LLMs following Benign IFT. Furthermore, we conduct comprehensive analyses to verify the soundness and flexibility of our ML-LR strategy.
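
To make the learning-rate differentiation concrete, the sketch below (not the authors' implementation) shows one way to assign separate learning rates to a robust module subset and to the remaining parameters via PyTorch optimizer parameter groups. The module names in MODS_ROBUST, the model checkpoint, and the learning-rate values are illustrative placeholders; in the paper the subset Mods_{Robust} is selected by the proxy-guided search algorithm, and which side receives the smaller rate is an assumption here, not a detail stated in the abstract.

import torch
from transformers import AutoModelForCausalLM

# Hypothetical subset of robust modules (placeholders only); in the paper this
# subset, Mods_Robust, is identified by the proxy-guided search algorithm.
MODS_ROBUST = {
    "model.layers.0.self_attn.q_proj",
    "model.layers.0.self_attn.k_proj",
}

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

robust_params, other_params = [], []
for name, param in model.named_parameters():
    # Drop the trailing ".weight"/".bias" so parameter names match module paths.
    module_name = name.rsplit(".", 1)[0]
    (robust_params if module_name in MODS_ROBUST else other_params).append(param)

# Two parameter groups with differentiated learning rates (illustrative values).
optimizer = torch.optim.AdamW([
    {"params": robust_params, "lr": 1e-6},
    {"params": other_params, "lr": 2e-5},
])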

@article{du2025_2410.04524,
  title={Toward Secure Tuning: Mitigating Security Risks from Instruction Fine-Tuning},
  author={Yanrui Du and Sendong Zhao and Jiawei Cao and Ming Ma and Danyang Zhao and Shuren Qi and Fenglei Fan and Ting Liu and Bing Qin},
  journal={arXiv preprint arXiv:2410.04524},
  year={2025}
}