Two-Stage Regularization-Based Structured Pruning for LLMs

The deployment of large language models (LLMs) is largely hindered by their enormous number of parameters. Structured pruning has emerged as a promising solution. Prior structured pruning methods directly remove unimportant parameters based on certain metrics, which often causes knowledge loss and necessitates extensive retraining. To overcome this, we introduce a novel pruning method, TRSP: Two-Stage Regularization-Based Structured Pruning for LLMs. Specifically, we multiply the output of each transformer layer by a learnable weight and iteratively learn these weights by adding their ℓ1-norm to the loss function as a regularization term; this serves as the first-stage regularization. Subsequently, we apply additional regularization to the difference between the output and input of layers with smaller weights, encouraging the transfer of knowledge to the preserved layers; this serves as the second-stage regularization. TRSP retains more knowledge and better preserves model performance than direct parameter elimination. Through extensive experiments, we show that TRSP outperforms strong layer-wise structured pruning methods without requiring retraining. As a layer-wise pruning method, it delivers notable end-to-end acceleration, making it a promising solution for efficient LLM deployment.
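As a rough illustration of the two stages described above, the PyTorch sketch below scales each layer's output by a learnable scalar, adds an ℓ1 penalty on those scalars (stage one), and penalizes the output-input difference of layers flagged as pruning candidates (stage two). This is a minimal reading of the abstract, not the authors' implementation: the names ScaledTransformerStack, trsp_loss, lambda1, lambda2, and keep_mask are illustrative assumptions, and the squared-difference penalty stands in for whatever exact form the paper uses.

import torch
import torch.nn as nn

class ScaledTransformerStack(nn.Module):
    """Wraps a stack of layers, scaling each layer's output by a learnable scalar."""
    def __init__(self, layers):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        # One learnable scalar per layer, initialized to 1.0
        self.layer_weights = nn.Parameter(torch.ones(len(layers)))

    def forward(self, hidden):
        residual_diffs = []
        for w, layer in zip(self.layer_weights, self.layers):
            out = w * layer(hidden)
            residual_diffs.append(out - hidden)  # how much this layer changes its input
            hidden = out
        return hidden, residual_diffs

def trsp_loss(task_loss, model, residual_diffs, lambda1, lambda2, keep_mask=None):
    """Two-stage regularized loss (sketch).

    Stage 1: ell_1 penalty on the learnable layer weights, sparsifying them.
    Stage 2: penalize ||output - input|| only for layers with small weights
             (keep_mask[i] == False), pushing their contribution toward an
             identity map so knowledge migrates to the preserved layers.
    """
    loss = task_loss + lambda1 * model.layer_weights.abs().sum()
    if keep_mask is not None:
        for i, diff in enumerate(residual_diffs):
            if not keep_mask[i]:
                loss = loss + lambda2 * diff.pow(2).mean()
    return loss

# Toy usage with two linear "layers" standing in for transformer blocks
stack = ScaledTransformerStack([nn.Linear(16, 16), nn.Linear(16, 16)])
x = torch.randn(4, 16)
out, diffs = stack(x)
loss = trsp_loss(out.pow(2).mean(), stack, diffs,
                 lambda1=1e-3, lambda2=1e-2, keep_mask=[True, False])
loss.backward()

After training with this objective, layers whose learned weights remain small would be the pruning candidates; because the second-stage penalty pushes their output toward their input, removing them approximates an identity mapping and disturbs the remaining network less.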
@article{feng2025_2505.18232,
  title   = {Two-Stage Regularization-Based Structured Pruning for LLMs},
  author  = {Mingkuan Feng and Jinyang Wu and Siyuan Liu and Shuai Zhang and Ruihan Jin and Feihu Che and Pengpeng Shao and Zhengqi Wen and Jianhua Tao},
  journal = {arXiv preprint arXiv:2505.18232},
  year    = {2025}
}