Hierarchical Safety Realignment: Lightweight Restoration of Safety in Pruned Large Vision-Language Models

Abstract

With the increasing size of Large Vision-Language Models (LVLMs), network pruning techniques aimed at compressing models for deployment in resource-constrained environments have garnered significant attention. However, we observe that pruning often leads to a degradation in safety performance. To address this issue, we present a novel and lightweight approach, termed Hierarchical Safety Realignment (HSR). HSR first quantifies the contribution of each attention head to safety and identifies the most critical ones; it then selectively restores, within these attention heads, the neurons that play a pivotal role in maintaining safety. This process hierarchically realigns the safety of pruned LVLMs, progressing from the attention-head level to the neuron level. We validate HSR across various models and pruning strategies, consistently achieving notable improvements in safety performance. To our knowledge, this is the first work explicitly focused on restoring safety in LVLMs post-pruning.
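The two-level procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the safety scores, the top-k selection rule, and the weight layout `(num_heads, head_dim)` are all assumptions made for the example; the paper's actual criteria for quantifying head- and neuron-level safety contributions are defined in the full text.

```python
import numpy as np

def hierarchical_safety_realignment(orig_w, pruned_w, head_safety_scores,
                                    neuron_safety_scores, top_k_heads,
                                    top_k_neurons):
    """Illustrative sketch of HSR's hierarchy (not the paper's exact method).

    orig_w, pruned_w:      weights of shape (num_heads, head_dim),
                           before and after pruning.
    head_safety_scores:    assumed per-head safety-contribution scores,
                           shape (num_heads,).
    neuron_safety_scores:  assumed per-neuron safety scores within each
                           head, shape (num_heads, head_dim).
    """
    restored = pruned_w.copy()
    # Head level: keep only the heads with the highest safety contribution.
    critical_heads = np.argsort(head_safety_scores)[-top_k_heads:]
    for h in critical_heads:
        # Neuron level: within each critical head, restore the original
        # (unpruned) weights of the neurons most pivotal to safety.
        critical_neurons = np.argsort(neuron_safety_scores[h])[-top_k_neurons:]
        restored[h, critical_neurons] = orig_w[h, critical_neurons]
    return restored
```

The key design point the sketch mirrors is that restoration is selective at both levels: most pruned weights stay pruned, so the realignment remains lightweight relative to the compressed model.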

@article{li2025_2505.16104,
  title={Hierarchical Safety Realignment: Lightweight Restoration of Safety in Pruned Large Vision-Language Models},
  author={Yue Li and Xin Yi and Dongsheng Shi and Gerard de Melo and Xiaoling Wang and Linlin Wang},
  journal={arXiv preprint arXiv:2505.16104},
  year={2025}
}