
ELDeR: Getting Efficient LLMs through Data-Driven Regularized Layer-wise Pruning

Main: 8 pages
Appendix: 4 pages
Bibliography: 3 pages
Figures: 8
Tables: 10
Abstract

The deployment of large language models (LLMs) in many fields is largely hindered by their high computational and memory costs. Recent studies suggest that LLMs exhibit sparsity that can be exploited for pruning. Previous pruning methods typically follow a prune-then-finetune paradigm. Since the pruned parts still contain valuable information, statically removing them without updating the remaining parameters often causes irreversible performance degradation and requires costly recovery fine-tuning (RFT) to maintain performance. To address this, we propose a novel paradigm: first apply regularization, then prune. Based on this paradigm, we propose ELDeR: Getting Efficient LLMs through Data-Driven Regularized Layer-wise Pruning. We multiply the output of each transformer layer by an initial weight and then iteratively learn these per-layer weights from a small amount of data with a simple procedure. We then apply regularization to the difference between the output and input of the layers with smaller weights, forcing their information to be transferred to the remaining layers. Compared with direct pruning, ELDeR reduces the information loss caused by direct parameter removal and thus better preserves the model's language modeling ability. Experimental results show that ELDeR outperforms strong layer-wise structured pruning methods while greatly reducing the computational cost of RFT. Because ELDeR prunes entire layers, it delivers substantial end-to-end acceleration, making it a promising technique for efficient LLMs.
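The sketch below illustrates the regularize-then-prune idea described in the abstract, under assumptions on our part: a wrapper that scales each transformer layer's output by a learnable weight, and a penalty that pushes the lowest-weight layers toward identity mappings before they are removed. The names (ScaledLayer, regularization_loss, num_prune, lam) are illustrative and not the authors' released implementation.

import torch
import torch.nn as nn

class ScaledLayer(nn.Module):
    # Wraps one transformer layer and scales its output by a learnable per-layer weight.
    def __init__(self, layer):
        super().__init__()
        self.layer = layer
        self.weight = nn.Parameter(torch.ones(1))  # initial layer weight
        self.last_residual = None

    def forward(self, hidden_states, **kwargs):
        out = self.layer(hidden_states, **kwargs)
        out = out[0] if isinstance(out, tuple) else out
        # Cache (output - input); the regularizer penalizes it for low-weight layers.
        self.last_residual = out - hidden_states
        return self.weight * out

def regularization_loss(layers, num_prune, lam=1e-2):
    # Select the layers with the smallest learned weights and push them toward
    # identity mappings, so removing them later loses little information.
    weights = torch.stack([l.weight.abs().squeeze() for l in layers])
    prune_idx = torch.argsort(weights)[:num_prune].tolist()
    reg = sum(layers[i].last_residual.pow(2).mean() for i in prune_idx)
    return lam * reg

In such a setup, this penalty would be added to the standard language-modeling loss on the small calibration set during the regularization stage; afterwards the num_prune lowest-weight layers are dropped, avoiding the information loss of direct removal.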

@article{feng2025_2505.18232,
  title={Two-Stage Regularization-Based Structured Pruning for LLMs},
  author={Mingkuan Feng and Jinyang Wu and Siyuan Liu and Shuai Zhang and Ruihan Jin and Feihu Che and Pengpeng Shao and Zhengqi Wen and Jianhua Tao},
  journal={arXiv preprint arXiv:2505.18232},
  year={2025}
}