ResearchTrend.AI


Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing (arXiv:2105.14839)

31 May 2021
David Peer
Sebastian Stabinger
Stefan Engl
A. Rodríguez-Sánchez

Papers citing "Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing"

4 / 4 papers shown
Why Lift so Heavy? Slimming Large Language Models by Cutting Off the Layers
Shuzhou Yuan, Ercong Nie, Bolei Ma, Michael Färber
18 Feb 2024
The EarlyBIRD Catches the Bug: On Exploiting Early Layers of Encoder Models for More Efficient Code Classification
Anastasiia Grishina, Max Hort, Leon Moonen
08 May 2023
Gradient-Free Structured Pruning with Unlabeled Data
Azade Nova, H. Dai, Dale Schuurmans
07 Mar 2023
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018