
Improved Methods for Model Pruning and Knowledge Distillation

Abstract

Model pruning is a performance optimization technique for large language models such as R1 or o3-mini. However, existing pruning methods often lead to significant performance degradation or require extensive retraining and fine-tuning. Pruning aims to identify and remove neurons and connections that are unlikely to contribute to the model's output during the human-computer interaction (inference) phase. Our goal is to obtain a much smaller and faster knowledge-distilled model that generates content nearly as good as that of the unpruned model. We propose MAMA Pruning, short for Movement and Magnitude Analysis, an improved pruning method that effectively reduces model size and computational complexity while maintaining performance comparable to the original unpruned model, even at extreme pruning levels. The method uses weights and biases fixed during the pre-training phase, together with GRPO rewards verified during the post-training phase, as novel pruning indicators. Preliminary experimental results show that our method outperforms, or is comparable to, state-of-the-art methods across various pruning levels and different downstream computational linguistics tasks.
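
The abstract does not give the exact scoring rule, so the following is only a minimal sketch of a combined magnitude-and-movement pruning indicator: it assumes a movement-pruning-style signal (weight times accumulated gradient), omits the GRPO-reward term, and uses illustrative function names (mama_style_scores, prune_by_score) that are not from the paper.

import torch

def mama_style_scores(weight: torch.Tensor, accumulated_grad: torch.Tensor) -> torch.Tensor:
    """Combine a magnitude signal (from the fixed pre-trained weights) with a
    movement-style signal (from gradients accumulated during post-training).

    The multiplicative combination and the omission of the GRPO-reward term are
    simplifying assumptions; the paper's exact indicator is not specified here.
    """
    magnitude = weight.abs()                   # |w|: classic magnitude-pruning signal
    movement = -(weight * accumulated_grad)    # positive when training pushes w away from zero
    return magnitude * movement.clamp(min=0.0)

def prune_by_score(weight: torch.Tensor, scores: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the fraction `sparsity` of weights with the lowest scores (unstructured pruning)."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return weight
    threshold = scores.flatten().kthvalue(k).values
    mask = (scores > threshold).to(weight.dtype)
    return weight * mask

# Toy usage: prune a random layer to 50% sparsity.
w = torch.randn(512, 512)
g = torch.randn_like(w)   # stand-in for gradients accumulated over post-training steps
w_pruned = prune_by_score(w, mama_style_scores(w, g), sparsity=0.5)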

@article{jiang2025_2505.14052,
  title={Improved Methods for Model Pruning and Knowledge Distillation},
  author={Wei Jiang and Anying Fu and Youling Zhang},
  journal={arXiv preprint arXiv:2505.14052},
  year={2025}
}