Prune Once for All: Sparse Pre-Trained Language Models

10 November 2021
Ofir Zafrir, Ariel Larey, Guy Boudoukh, Haihao Shen, Moshe Wasserblat
VLM
arXiv: 2111.05754

Papers citing "Prune Once for All: Sparse Pre-Trained Language Models"

9 / 59 papers shown
Fast DistilBERT on CPUs
Haihao Shen, Ofir Zafrir, Bo Dong, Hengyu Meng, Xinyu Ye, Zhe Wang, Yi Ding, Hanwen Chang, Guy Boudoukh, Moshe Wasserblat
VLM
27 Oct 2022

GMP*: Well-Tuned Gradual Magnitude Pruning Can Outperform Most BERT-Pruning Methods
Eldar Kurtic, Dan Alistarh
AI4MH
12 Oct 2022

Efficient Methods for Natural Language Processing: A Survey
Marcos Vinícius Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, ..., Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz
31 Aug 2022

PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance
Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, T. Zhao
25 Jun 2022

Sparse*BERT: Sparse Models Generalize To New Tasks and Domains
Daniel Fernando Campos, Alexandre Marques, Tuan Nguyen, Mark Kurtz, Chengxiang Zhai
25 May 2022

Structured Pruning Learns Compact and Accurate Models
Mengzhou Xia, Zexuan Zhong, Danqi Chen
VLM
01 Apr 2022

LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT
Rui Wang, Qibing Bai, Junyi Ao, Long Zhou, Zhixiang Xiong, Zhihua Wei, Yu Zhang, Tom Ko, Haizhou Li
29 Mar 2022

The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models
Eldar Kurtic, Daniel Fernando Campos, Tuan Nguyen, Elias Frantar, Mark Kurtz, Ben Fineran, Michael Goin, Dan Alistarh
VLM, MQ, MedIm
14 Mar 2022

LiteTransformerSearch: Training-free Neural Architecture Search for Efficient Language Models
Mojan Javaheripi, Gustavo de Rosa, Subhabrata Mukherjee, S. Shah, Tomasz Religa, C. C. T. Mendes, Sébastien Bubeck, F. Koushanfar, Debadeepta Dey
04 Mar 2022