NEPENTHE: Entropy-Based Pruning as a Neural Network Depth's Reducer
arXiv:2404.16890, 24 April 2024
Zhu Liao, Victor Quétu, Van-Tam Nguyen, Enzo Tartaglione
Papers citing "NEPENTHE: Entropy-Based Pruning as a Neural Network Depth's Reducer" (6 papers shown):
1. Entropy-Based Block Pruning for Efficient Large Language Models. Liangwei Yang, Yuhui Xu, Juntao Tan, Doyen Sahoo, Shri Kiran Srinivasan, Caiming Xiong, Han Wang, Shelby Heinecke. 04 Apr 2025.
2. LaCoOT: Layer Collapse through Optimal Transport. Victor Quétu, Nour Hezbri, Enzo Tartaglione. 13 Jun 2024.
3. DSD²: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free? Victor Quétu, Enzo Tartaglione. 02 Mar 2023.
4. CMT: Convolutional Neural Networks Meet Vision Transformers. Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Chunjing Xu, Yunhe Wang, Chang Xu. 13 Jul 2021.
5. SeReNe: Sensitivity-Based Regularization of Neurons for Structured Sparsity in Neural Networks. Enzo Tartaglione, Andrea Bragagnolo, Francesco Odierna, Attilio Fiandrotti, Marco Grangetto. 07 Feb 2021.
6. What is the State of Neural Network Pruning? Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag. 06 Mar 2020.