
MaskPro: Linear-Space Probabilistic Learning for Strict (N:M)-Sparsity on Large Language Models

Main: 9 pages
Figures: 7
Bibliography: 3 pages
Tables: 7
Appendix: 12 pages
Abstract

The rapid scaling of large language models (LLMs) has made inference efficiency a primary bottleneck in practical deployment. To address this, semi-structured sparsity offers a promising solution by strategically retaining N elements out of every M weights, thereby enabling hardware-friendly acceleration and a reduced memory footprint. However, existing (N:M)-compatible approaches typically fall into two categories: rule-based layerwise greedy search, which suffers from considerable errors, and gradient-driven combinatorial learning, which incurs prohibitive training costs. To tackle these challenges, we propose a novel linear-space probabilistic framework named MaskPro, which learns a prior categorical distribution for every M consecutive weights and then leverages this distribution to generate (N:M) sparsity through N-way sampling without replacement. Furthermore, to mitigate the training instability induced by the high variance of policy gradients in the extremely large combinatorial space, we propose a novel update method that introduces a moving-average tracker of loss residuals in place of the vanilla loss. Finally, we conduct comprehensive theoretical analysis and extensive experiments to validate the superior performance of MaskPro, as well as its excellent scalability in memory efficiency and exceptional robustness to data samples. Our code is available at this https URL.
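To make the sampling procedure concrete, the following sketch (not from the paper) illustrates the idea in PyTorch: one categorical logit vector per group of M consecutive weights, an N-way draw without replacement via the Gumbel top-k trick, and a REINFORCE-style update that scales the gradient of the sample's log-probability by the loss residual relative to a moving-average tracker rather than the raw loss. All names (sample_mask, log_prob, update), the Gumbel top-k sampler, the sequential renormalization used for the log-probability, the 2:4 setting, and the hyperparameters are illustrative assumptions, not the authors' implementation.

import torch

M, N = 4, 2                        # strict 2:4 sparsity (illustrative)
num_groups = 8                     # e.g., weight.numel() // M
logits = torch.zeros(num_groups, M, requires_grad=True)  # one categorical per group
beta, lr = 0.99, 1e-2              # assumed hyperparameters
ema_loss = None                    # moving-average tracker of the loss

def sample_mask(logits):
    """Draw N indices per group without replacement via the Gumbel top-k trick."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
    idx = torch.topk(logits + gumbel, k=N, dim=-1).indices
    mask = torch.zeros_like(logits).scatter(-1, idx, 1.0)
    return mask.detach(), idx

def log_prob(logits, idx):
    """Simplified log-probability of the sampled index set: renormalize the
    categorical distribution after each draw (stand-in for the exact term)."""
    logp = logits.new_zeros(logits.shape[0])
    avail = torch.ones_like(logits, dtype=torch.bool)
    for j in range(N):
        step = torch.log_softmax(logits.masked_fill(~avail, float("-inf")), -1)
        pick = idx[:, j:j + 1]
        logp = logp + step.gather(-1, pick).squeeze(-1)
        avail = avail.scatter(-1, pick, False)
    return logp

def update(loss_value, idx):
    """REINFORCE-style step: weight the log-probability gradient by the loss
    residual relative to the moving-average tracker instead of the raw loss."""
    global ema_loss
    if ema_loss is None:
        ema_loss = loss_value
    residual = loss_value - ema_loss
    ema_loss = beta * ema_loss + (1 - beta) * loss_value
    (residual * log_prob(logits, idx).sum()).backward()
    with torch.no_grad():
        logits -= lr * logits.grad
        logits.grad = None

Usage would look like: mask, idx = sample_mask(logits); evaluate the model loss with the masked weights (weight * mask.view_as(weight), where "evaluate" is a hypothetical loss routine); then call update(loss.item(), idx). The logits live in linear space (one M-vector per group), and the residual baseline is what keeps the policy-gradient variance manageable.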

@article{sun2025_2506.12876,
  title={MaskPro: Linear-Space Probabilistic Learning for Strict (N:M)-Sparsity on Large Language Models},
  author={Yan Sun and Qixin Zhang and Zhiyuan Yu and Xikun Zhang and Li Shen and Dacheng Tao},
  journal={arXiv preprint arXiv:2506.12876},
  year={2025}
}