Wanda++: Pruning Large Language Models via Regional Gradients

6 March 2025
Yifan Yang
Kai Zhen
Bhavana Ganesh
Aram Galstyan
Goeric Huybrechts
Markus Müller
Jonas M. Kübler
Rupak Vignesh Swaminathan
Athanasios Mouchtaris
Sravan Babu Bodapati
Nathan Susanj
Zheng Zhang
Jack FitzGerald
Abhishek Kumar
Main: 9 pages, 4 figures, 10 tables; Bibliography: 2 pages; Appendix: 2 pages
Abstract

Large Language Model (LLM) pruning seeks to remove unimportant weights to speed up inference with minimal accuracy impact. However, existing methods often suffer from accuracy degradation unless they perform full-model sparsity-aware fine-tuning. This paper presents Wanda++, a novel pruning framework that outperforms state-of-the-art methods by utilizing decoder-block-level regional gradients. Specifically, Wanda++ is the first to improve the pruning score with regional gradients, and it proposes an efficient regional optimization method that minimizes the pruning-induced discrepancy between the dense and sparse decoder outputs. Notably, Wanda++ improves perplexity by up to 32% over Wanda on language modeling and generalizes effectively to downstream tasks. Moreover, despite updating weights during regional optimization, Wanda++ remains orthogonal to sparsity-aware fine-tuning, and combining it with LoRA reduces perplexity further to a great extent. Our approach is lightweight, pruning a 7B LLaMA model in under 10 minutes on a single H100 GPU.
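To make the idea concrete: the original Wanda score ranks each weight by its magnitude scaled by the L2 norm of the corresponding input channel, and Wanda++ augments this with decoder-block-level ("regional") gradient information. The sketch below is a minimal illustration, not the authors' implementation; the function name, the additive gradient term, and the `alpha` coefficient are assumptions made for exposition, and the exact Wanda++ score is defined in the paper.

```python
import torch

def prune_linear_2_4(weight, act_norm, grad=None, alpha=1.0):
    """Sketch of a Wanda-style 2:4 pruning step for one linear layer.

    weight:   (out_features, in_features) dense weight matrix
    act_norm: (in_features,) L2 norm of each input channel over calibration data
    grad:     optional (out_features, in_features) regional gradient of a
              decoder-block-level loss w.r.t. this weight (illustrative)
    """
    # Wanda importance score: |W_ij| * ||X_j||_2
    score = weight.abs() * act_norm.unsqueeze(0)
    if grad is not None:
        # Assumed regional-gradient term; the actual Wanda++ score differs.
        score = score + alpha * (grad.abs() * weight.abs())

    # 2:4 semi-structured sparsity: within every group of 4 consecutive input
    # weights, keep the 2 with the highest score and zero out the rest.
    out_dim, in_dim = weight.shape
    groups = score.view(out_dim, in_dim // 4, 4)
    keep = groups.topk(2, dim=-1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool).scatter_(-1, keep, True)
    return weight * mask.view(out_dim, in_dim)
```

After masking, the regional optimization step described in the abstract would further update the remaining weights of each decoder block so that its sparse output matches the dense output on calibration inputs, e.g. by minimizing a mean-squared error between the two block outputs.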

@article{yang2025_2503.04992,
  title={Wanda++: Pruning Large Language Models via Regional Gradients},
  author={Yifan Yang and Kai Zhen and Bhavana Ganesh and Aram Galstyan and Goeric Huybrechts and Markus Müller and Jonas M. Kübler and Rupak Vignesh Swaminathan and Athanasios Mouchtaris and Sravan Babu Bodapati and Nathan Susanj and Zheng Zhang and Jack FitzGerald and Abhishek Kumar},
  journal={arXiv preprint arXiv:2503.04992},
  year={2025}
}