Layer-wise Pruning of Transformer Attention Heads for Efficient Language Modeling

7 October 2021
Kyuhong Shim, Iksoo Choi, Wonyong Sung, Jungwook Choi
ArXiv · PDF · HTML

Papers citing "Layer-wise Pruning of Transformer Attention Heads for Efficient Language Modeling"

2 of 2 citing papers shown:
Softpick: No Attention Sink, No Massive Activations with Rectified Softmax
Zayd Muhammad Kawakibi Zuhri, Erland Hilman Fuadi, Alham Fikri Aji
29 Apr 2025
Exploring Attention Map Reuse for Efficient Transformer Neural Networks
Kyuhong Shim, Jungwook Choi, Wonyong Sung
29 Jan 2023