Beyond Linear Approximations: A Novel Pruning Approach for Attention Matrix

15 October 2024
Yingyu Liang
Jiangxuan Long
Zhenmei Shi
Zhao Song
Yufa Zhou
Abstract

Large Language Models (LLMs) have shown immense potential in enhancing various aspects of our daily lives, from conversational AI to search and AI assistants. However, their growing capabilities come at the cost of extremely large model sizes, making deployment on edge devices challenging due to memory and computational constraints. This paper introduces a novel approach to LLM weight pruning that directly optimizes for approximating the attention matrix, a core component of transformer architectures. Unlike existing methods that focus on linear approximations, our approach accounts for the non-linear nature of the Softmax attention mechanism. We provide theoretical guarantees for the convergence of our Gradient Descent-based optimization method to a near-optimal pruning mask solution. Our empirical results demonstrate the effectiveness of our non-linear pruning approach in maintaining model performance while significantly reducing computational costs, outperforming the current state-of-the-art methods SparseGPT and Wanda by a large margin. This work establishes a new theoretical foundation for pruning algorithm design in LLMs, potentially paving the way for more efficient LLM inference on resource-constrained devices.
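For intuition, the sketch below (PyTorch, not the authors' code) illustrates the idea the abstract describes: learning a pruning mask by gradient descent so that attention computed with the masked weights matches the dense attention matrix after the Softmax non-linearity, rather than approximating the weights linearly. All names (softmax_attention, learn_pruning_mask, sparsity, steps, lr) and the soft-mask relaxation are assumptions made for illustration, not details taken from the paper.

# Minimal illustrative sketch: gradient-descent search for a pruning mask on Wq
# that preserves the post-Softmax attention matrix. Names and hyperparameters
# are hypothetical; this is not the authors' algorithm or code.

import torch

def softmax_attention(X, Wq, Wk):
    # Attention matrix A = softmax(X Wq (X Wk)^T / sqrt(d))
    d = Wq.shape[1]
    scores = (X @ Wq) @ (X @ Wk).T / d ** 0.5
    return torch.softmax(scores, dim=-1)

def learn_pruning_mask(X, Wq, Wk, sparsity=0.5, steps=500, lr=1e-2):
    """Optimize a relaxed (soft) mask on Wq so the masked attention matches the dense one."""
    target = softmax_attention(X, Wq, Wk).detach()
    logits = torch.zeros_like(Wq, requires_grad=True)   # soft-mask parameters
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        mask = torch.sigmoid(logits)                    # soft mask in (0, 1)
        approx = softmax_attention(X, Wq * mask, Wk)    # attention with masked weights
        loss = torch.norm(approx - target) ** 2         # non-linear (post-Softmax) objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Harden the mask: keep the (1 - sparsity) fraction of entries with the largest scores.
    k = int(Wq.numel() * (1 - sparsity))
    thresh = torch.topk(torch.sigmoid(logits).flatten(), k).values.min()
    return (torch.sigmoid(logits) >= thresh).float()

# Example usage on random data (shapes are arbitrary):
X = torch.randn(16, 32)
Wq, Wk = torch.randn(32, 32), torch.randn(32, 32)
mask = learn_pruning_mask(X, Wq, Wk, sparsity=0.5)

The point this sketch mirrors is that the reconstruction loss is taken after the Softmax, which is what separates an attention-matrix objective from the linear weight- or activation-approximation criteria used by methods such as SparseGPT and Wanda.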

View on arXiv
@article{liang2025_2410.11261,
  title={Beyond Linear Approximations: A Novel Pruning Approach for Attention Matrix},
  author={Yingyu Liang and Jiangxuan Long and Zhenmei Shi and Zhao Song and Yufa Zhou},
  journal={arXiv preprint arXiv:2410.11261},
  year={2025}
}