SEAP: Training-free Sparse Expert Activation Pruning Unlock the Brainpower of Large Language Models

10 March 2025
Xun Liang
Hanyu Wang
Huayi Lai
Simin Niu
Shichao Song
Jiawei Yang
Jihao Zhao
Feiyu Xiong
Bo Tang
Zhiyu Li
Abstract

Large Language Models (LLMs) have achieved remarkable success across various natural language processing tasks, yet their high computational cost during inference remains a major bottleneck. This paper introduces Sparse Expert Activation Pruning (SEAP), a training-free pruning method that selectively retains task-relevant parameters to reduce inference overhead. Inspired by the clustering patterns of hidden states and activations in LLMs, SEAP identifies task-specific expert activation patterns and prunes the model while preserving task performance and enhancing computational efficiency. Experimental results demonstrate that SEAP significantly reduces computational overhead while maintaining competitive accuracy. Notably, at 50% pruning, SEAP surpasses both WandA and FLAP by over 20%, and at 20% pruning, it incurs only a 2.2% performance drop compared to the dense model. These findings highlight SEAP's scalability and effectiveness, making it a promising approach for optimizing large-scale LLMs.
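The abstract describes SEAP as scoring parameters by task-specific activation patterns and pruning them without any retraining. As a rough illustration of that idea only, the minimal PyTorch sketch below scores the hidden units of a single MLP block by their mean absolute activation on a small task-specific calibration batch and zeroes out the lowest-scoring fraction. The scoring rule, the choice of layer, and all function names here are assumptions made for illustration, not the paper's exact procedure.

# Minimal sketch of training-free, activation-based pruning in the spirit of SEAP.
# Assumptions (not from the paper): hidden units of a 2-layer MLP are scored by their
# mean absolute activation on a task-specific calibration batch, and the lowest-scoring
# fraction is zeroed out. Names and the scoring rule are illustrative placeholders.
import torch
import torch.nn as nn


def prune_mlp_by_task_activation(mlp: nn.Sequential,
                                 calib_inputs: torch.Tensor,
                                 sparsity: float = 0.5) -> nn.Sequential:
    """Zero out the hidden units with the smallest task-specific activations."""
    up, act, down = mlp[0], mlp[1], mlp[2]          # Linear -> nonlinearity -> Linear
    with torch.no_grad():
        hidden = act(up(calib_inputs))              # (batch, hidden_dim) activations
        scores = hidden.abs().mean(dim=0)           # importance score per hidden unit
        k = int(sparsity * scores.numel())          # number of units to prune
        prune_idx = torch.topk(scores, k, largest=False).indices
        up.weight[prune_idx, :] = 0.0               # remove pruned units' incoming weights
        if up.bias is not None:
            up.bias[prune_idx] = 0.0
        down.weight[:, prune_idx] = 0.0             # and their outgoing weights
    return mlp


if __name__ == "__main__":
    torch.manual_seed(0)
    mlp = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))
    calib = torch.randn(32, 64)                     # stand-in for task-specific inputs
    prune_mlp_by_task_activation(mlp, calib, sparsity=0.5)
    print("zeroed hidden units:",
          int((mlp[0].weight.abs().sum(dim=1) == 0).sum()))

In this toy version, zeroing weights merely stands in for structured removal of the pruned units; a real implementation would shrink the weight matrices or use sparse kernels to realize the inference savings the paper reports.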

@article{liang2025_2503.07605,
  title={SEAP: Training-free Sparse Expert Activation Pruning Unlock the Brainpower of Large Language Models},
  author={Xun Liang and Hanyu Wang and Huayi Lai and Simin Niu and Shichao Song and Jiawei Yang and Jihao Zhao and Feiyu Xiong and Bo Tang and Zhiyu Li},
  journal={arXiv preprint arXiv:2503.07605},
  year={2025}
}