Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing
David Peer, Sebastian Stabinger, Stefan Engl, A. Rodríguez-Sánchez
arXiv:2105.14839 (31 May 2021)
Papers citing "Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing" (4 of 4 shown):
- Why Lift so Heavy? Slimming Large Language Models by Cutting Off the Layers. Shuzhou Yuan, Ercong Nie, Bolei Ma, Michael Farber. 18 Feb 2024.
- The EarlyBIRD Catches the Bug: On Exploiting Early Layers of Encoder Models for More Efficient Code Classification. Anastasiia Grishina, Max Hort, Leon Moonen. 08 May 2023.
- Gradient-Free Structured Pruning with Unlabeled Data. Azade Nova, H. Dai, Dale Schuurmans. 07 Mar 2023.
- GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 20 Apr 2018.