
TSENOR: Highly-Efficient Algorithm for Finding Transposable N:M Sparse Masks

Main: 10 pages, 6 figures, 7 tables · Bibliography: 4 pages · Appendix: 10 pages
Abstract

Network pruning reduces the computational requirements of large neural networks, with N:M sparsity -- retaining only N out of every M consecutive weights -- offering a compelling balance between compressed model quality and hardware acceleration. However, N:M sparsity only accelerates forward-pass computations, since N:M patterns are not preserved under matrix transposition; this limits efficiency during training, where both the forward and backward passes are computationally intensive. While transposable N:M sparsity has been proposed to address this limitation, existing methods for finding transposable N:M sparse masks either fail to scale to large models or are restricted to M=4, which results in a suboptimal compression-accuracy trade-off. We introduce an efficient solver for transposable N:M masks that scales to billion-parameter models. We formulate mask generation as a set of optimal transport problems and solve them through entropy regularization and Dykstra's algorithm, followed by a rounding procedure. Our tensor-based implementation exploits GPU parallelism, achieving up to 100x speedup with only 1-10% error compared to existing methods. Our approach can be integrated with layer-wise N:M pruning frameworks, including Wanda, SparseGPT, and ALPS, to produce transposable N:M sparse models with arbitrary N:M values. Experiments show that LLaMA3.2-8B with transposable 16:32 sparsity maintains performance close to its standard N:M counterpart and outperforms a standard 2:4 sparse model, demonstrating the practical value of our approach.
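To make the pipeline in the abstract concrete, here is a minimal sketch of the core subproblem for a single M-by-M block: relax the binary transposable mask to [0,1] with row and column sums equal to N, solve the entropic relaxation by Sinkhorn-style alternating scaling (a simplification of the full Dykstra iteration with box constraints that the paper describes), and then round greedily. All names, defaults, and the simplified projection scheme below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transposable_nm_mask(W, N=2, M=4, eps=0.05, iters=200):
    """Sketch: entropy-regularized transposable N:M mask for one MxM block.

    Maximizes sum(|W| * mask) subject to each row and each column of the
    block keeping exactly N weights, so both W and W^T satisfy N:M.
    Assumption: plain Sinkhorn scaling stands in for the Dykstra iteration.
    """
    K = np.exp(np.abs(W) / eps)          # entropic importance kernel
    u = np.ones(M)
    v = np.ones(M)
    for _ in range(iters):               # alternate row/column rescaling
        u = N / (K @ v)
        v = N / (K.T @ u)
    P = np.outer(u, v) * K               # relaxed mask; row/col sums ~= N

    # Rounding: take the largest relaxed entries while respecting the
    # per-row and per-column budgets of N nonzeros each.
    mask = np.zeros((M, M), dtype=bool)
    row_left = np.full(M, N)
    col_left = np.full(M, N)
    for idx in np.argsort(-P, axis=None):
        i, j = divmod(idx, M)
        if row_left[i] > 0 and col_left[j] > 0:
            mask[i, j] = True
            row_left[i] -= 1
            col_left[j] -= 1
    return mask
```

Because the row and column budgets sum to the same total (N*M), the greedy pass always fills every budget exactly, so the returned mask is N:M along both rows and columns. The paper's tensorized version would solve many such blocks in parallel on GPU; this sketch handles one block for clarity.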

@article{meng2025_2505.23949,
  title={TSENOR: Highly-Efficient Algorithm for Finding Transposable N:M Sparse Masks},
  author={Xiang Meng and Mehdi Makni and Rahul Mazumder},
  journal={arXiv preprint arXiv:2505.23949},
  year={2025}
}