Pangu Light: Weight Re-Initialization for Pruning and Accelerating LLMs

Large Language Models (LLMs) deliver state-of-the-art capabilities across numerous tasks, but their immense size and inference cost pose significant computational challenges for practical deployment. While structured pruning offers a promising avenue for model compression, existing methods often struggle with the detrimental effects of aggressive, simultaneous width and depth reductions, which lead to substantial performance degradation. This paper argues that a critical, often overlooked, requirement for making such aggressive joint pruning viable is the strategic re-initialization and adjustment of the remaining weights, which improves accuracy during post-pruning training. We introduce Pangu Light, a framework for LLM acceleration centered on structured pruning coupled with novel weight re-initialization techniques designed to supply this "missing piece". The framework systematically targets multiple axes, including model width, depth, attention heads, and RMSNorm, and its effectiveness is rooted in re-initialization methods such as Cross-Layer Attention Pruning (CLAP) and Stabilized LayerNorm Pruning (SLNP), which mitigate performance drops by giving the pruned network a better starting point for training. To further enhance efficiency, Pangu Light incorporates specialized optimizations such as absorbing Post-RMSNorm computations and tailors its strategies to the characteristics of Ascend NPUs. Pangu Light models consistently exhibit a superior accuracy-efficiency trade-off, outperforming prominent pruning baselines such as Nemotron and established LLMs such as the Qwen3 series. For instance, on Ascend NPUs, Pangu Light-32B attains an 81.6 average score at 2585 tokens/s throughput, exceeding Qwen3-32B's 80.9 average score and 2225 tokens/s.
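The mention of absorbing Post-RMSNorm computations can be illustrated with a common weight-folding identity: a learnable RMSNorm scale γ is a per-channel multiplication, so it can be folded into the input columns of the linear projection that follows, leaving a parameter-free normalization. The sketch below is a minimal PyTorch illustration of this general idea under a standard RMSNorm formulation; the `RMSNorm` class and `absorb_rmsnorm_scale` helper are illustrative names, not the paper's implementation, and the full Pangu Light method (CLAP, SLNP, and the NPU-specific optimizations) is not reproduced here.

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Minimal RMSNorm with a learnable per-channel scale (gamma)."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.weight

def absorb_rmsnorm_scale(norm: RMSNorm, linear: nn.Linear) -> None:
    """Fold the norm's learnable scale into the following projection:
    linear(norm(x)) is unchanged, but the norm becomes a pure
    root-mean-square rescaling with unit scale."""
    with torch.no_grad():
        # linear.weight has shape (out, in); scaling its input columns by
        # gamma is equivalent to applying gamma before the projection.
        linear.weight.mul_(norm.weight)
        norm.weight.fill_(1.0)

# Sanity check: outputs match before and after absorption.
dim = 8
norm, proj = RMSNorm(dim), nn.Linear(dim, 4, bias=False)
with torch.no_grad():
    norm.weight.uniform_(0.5, 1.5)
x = torch.randn(2, dim)
ref = proj(norm(x))
absorb_rmsnorm_scale(norm, proj)
assert torch.allclose(ref, proj(norm(x)), atol=1e-5)
```

After folding, the projection output is unchanged while the normalization layer no longer carries a learned multiply, which is the sense in which a Post-RMSNorm scaling can be "absorbed" into adjacent weights.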
@article{chen2025_2505.20155,
  title   = {Pangu Light: Weight Re-Initialization for Pruning and Accelerating LLMs},
  author  = {Hanting Chen and Jiarui Qin and Jialong Guo and Tao Yuan and Yichun Yin and Huiling Zhen and Yasheng Wang and Jinpeng Li and Xiaojun Meng and Meng Zhang and Rongju Ruan and Zheyuan Bai and Yehui Tang and Can Chen and Xinghao Chen and Fisher Yu and Ruiming Tang and Yunhe Wang},
  journal = {arXiv preprint arXiv:2505.20155},
  year    = {2025}
}