Gradient-based Intra-attention Pruning on Pre-trained Language Models

15 December 2022
Ziqing Yang
Yiming Cui
Xin Yao
Shijin Wang
arXiv: 2212.07634
Abstract

Pre-trained language models achieve superior performance but are computationally expensive. Techniques such as pruning and knowledge distillation have been developed to reduce their sizes and latencies. In this work, we propose GRAIN (Gradient-based Intra-attention pruning), a structured pruning method that performs task-specific pruning with knowledge distillation and yields highly effective models. Unlike common approaches that prune each attention head as a whole, GRAIN inspects and prunes intra-attention structures, which greatly expands the structure search space and enables more flexible models. We also propose a gradient separation strategy that reduces the interference of distillation on pruning, allowing the two approaches to be combined more effectively. Experiments on GLUE, SQuAD, and CoNLL 2003 show that GRAIN notably outperforms other methods, especially in the high-sparsity regime, and achieves 6∼7× speedups while maintaining 93%∼99% of the original performance. Under extreme compression, where only 3% of the transformer weights remain, the pruned model is still competitive with larger models.
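The abstract describes two ingredients: scoring structures inside each attention head rather than whole heads, and keeping distillation gradients from interfering with the gradients used for pruning decisions. The sketch below illustrates how such a scheme could look in PyTorch. It is not the paper's implementation: the first-order Taylor criterion |w · ∂L/∂w|, the Hugging Face BERT module layout (query/key/value projections, model.bert.encoder.layer), and the training_step helper are all illustrative assumptions.

import torch
import torch.nn as nn


def intra_attention_importance(self_attn, num_heads):
    """Score every (head, dimension) slot inside an attention block.

    Assumes `self_attn` exposes `query`, `key`, `value` nn.Linear projections
    (as in Hugging Face BERT) and that gradients have already been populated
    by a backward pass on the task loss. Uses a first-order Taylor criterion
    |w * dL/dw| as a stand-in importance score.
    """
    head_dim = self_attn.query.weight.size(0) // num_heads
    score = 0.0
    for proj in (self_attn.query, self_attn.key, self_attn.value):
        # Sensitivity of each output dimension of the projection.
        s = (proj.weight * proj.weight.grad).abs().sum(dim=1)   # (all_head_size,)
        score = score + s.view(num_heads, head_dim)
    # Scores are kept per intra-attention dimension, not per whole head,
    # which is what enlarges the structure search space.
    return score  # (num_heads, head_dim)


def training_step(model, batch, teacher_logits, optimizer, num_heads, alpha=0.5):
    """Hypothetical step illustrating gradient separation: importance is
    accumulated from the task loss alone, while the parameter update uses
    the combined task + distillation loss."""
    logits = model(**batch).logits
    task_loss = nn.functional.cross_entropy(logits, batch["labels"])
    distill_loss = nn.functional.kl_div(
        logits.log_softmax(-1), teacher_logits.softmax(-1), reduction="batchmean"
    )

    # 1) Task-loss gradients only feed the pruning importance scores.
    optimizer.zero_grad()
    task_loss.backward(retain_graph=True)
    scores = [
        intra_attention_importance(layer.attention.self, num_heads)
        for layer in model.bert.encoder.layer
    ]

    # 2) The combined task + distillation loss drives the weight update.
    optimizer.zero_grad()
    (alpha * task_loss + (1 - alpha) * distill_loss).backward()
    optimizer.step()
    return scores

In this toy setup, the pruning decision sees only task-loss sensitivity, while distillation still shapes the surviving weights through the optimizer step; the actual GRAIN criterion and bookkeeping may differ.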
