Layer-adaptive sparsity for the Magnitude-based Pruning
Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
arXiv 2010.07611, 15 October 2020
Papers citing "Layer-adaptive sparsity for the Magnitude-based Pruning" (28 papers shown)
The Larger the Merrier? Efficient Large AI Model Inference in Wireless Edge Networks
Zhonghao Lyu, Ming Xiao, Jie Xu, Mikael Skoglund, Marco Di Renzo (14 May 2025)

Efficient Shapley Value-based Non-Uniform Pruning of Large Language Models
Chuan Sun, Han Yu, Lizhen Cui, Xiaoxiao Li (03 May 2025)

Neuroplasticity in Artificial Intelligence -- An Overview and Inspirations on Drop In & Out Learning [AI4CE]
Yupei Li, M. Milling, Björn Schuller (27 Mar 2025)

Sparse High Rank Adapters [MQ]
K. Bhardwaj, N. Pandey, Sweta Priyadarshi, Viswanath Ganapathy, Rafael Esteves, ..., P. Whatmough, Risheek Garrepalli, M. V. Baalen, Harris Teague, Markus Nagel (28 Jan 2025)

Meta-Sparsity: Learning Optimal Sparse Structures in Multi-task Networks through Meta-learning
Richa Upadhyay, Ronald Phlypo, Rajkumar Saini, Marcus Liwicki (21 Jan 2025)

Layer-Adaptive State Pruning for Deep State Space Models
Minseon Gwak, Seongrok Moon, Joohwan Ko, PooGyeon Park (05 Nov 2024)

Straightforward Layer-wise Pruning for More Efficient Visual Adaptation
Ruizi Han, Jinglei Tang (19 Jul 2024)

LPViT: Low-Power Semi-structured Pruning for Vision Transformers [ViT, VLM]
Kaixin Xu, Zhe Wang, Chunyun Chen, Xue Geng, Jie Lin, Xulei Yang, Min-man Wu, Min Wu, Xiaoli Li, Weisi Lin (02 Jul 2024)

Fast and Controllable Post-training Sparsity: Learning Optimal Sparsity Allocation with Global Constraint in Minutes
Ruihao Gong, Yang Yong, Zining Wang, Jinyang Guo, Xiuying Wei, Yuqing Ma, Xianglong Liu (09 May 2024)

Light Field Compression Based on Implicit Neural Representation
Henan Wang, Hanxi Zhu, Zhibo Chen (07 May 2024)

Towards Meta-Pruning via Optimal Transport [MoMe]
Alexander Theus, Olin Geimer, Friedrich Wicke, Thomas Hofmann, Sotiris Anagnostidis, Sidak Pal Singh (12 Feb 2024)

Stochastic Subnetwork Annealing: A Regularization Technique for Fine Tuning Pruned Subnetworks
Tim Whitaker, Darrell Whitley (16 Jan 2024)

PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs [VLM]
Max Zimmer, Megi Andoni, Christoph Spiegel, Sebastian Pokutta (23 Dec 2023)

Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity
Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, ..., Mykola Pechenizkiy, Yi Liang, Michael Bendersky, Zhangyang Wang, Shiwei Liu (08 Oct 2023)

My3DGen: A Scalable Personalized 3D Generative Model [3DH]
Luchao Qi, Jiaye Wu, Annie N. Wang, Sheng-Yu Wang, Roni Sengupta (11 Jul 2023)

A Theoretical Perspective on Subnetwork Contributions to Adversarial Robustness [AAML]
Jovon Craig, Joshua Andle, Theodore S. Nowak, Salimeh Yasaei Sekeh (07 Jul 2023)

Considering Layerwise Importance in the Lottery Ticket Hypothesis
Benjamin Vandersmissen, José Oramas (22 Feb 2023)

An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning
Mohammad AkbarTajari, S. Rajaee, Mohammad Taher Pilehvar (01 Feb 2023)

M22: A Communication-Efficient Algorithm for Federated Learning Inspired by Rate-Distortion [FedML]
Yangyi Liu, Stefano Rini, Sadaf Salehkalaibar, Jun Chen (23 Jan 2023)

AP: Selective Activation for De-sparsifying Pruned Neural Networks [AAML]
Shiyu Liu, Rohan Ghosh, Dylan Tan, Mehul Motani (09 Dec 2022)

SNIPER Training: Single-Shot Sparse Training for Text-to-Speech [VLM]
Perry Lam, Huayun Zhang, Nancy F. Chen, Berrak Sisman, Dorien Herremans (14 Nov 2022)

Advancing Model Pruning via Bi-level Optimization
Yihua Zhang, Yuguang Yao, Parikshit Ram, Pu Zhao, Tianlong Chen, Min-Fong Hong, Yanzhi Wang, Sijia Liu (08 Oct 2022)

MiNL: Micro-images based Neural Representation for Light Fields
Ziru Xu, Henan Wang, Zhibo Chen (17 Sep 2022)

Cut Inner Layers: A Structured Pruning Strategy for Efficient U-Net GANs
Bo-Kyeong Kim, Shinkook Choi, Hancheol Park (29 Jun 2022)

Compression-aware Training of Neural Networks using Frank-Wolfe
Max Zimmer, Christoph Spiegel, Sebastian Pokutta (24 May 2022)

Prune Your Model Before Distill It [VLM]
Jinhyuk Park, Albert No (30 Sep 2021)

An Information-Theoretic Justification for Model Pruning
Berivan Isik, Tsachy Weissman, Albert No (16 Feb 2021)

Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro (27 Feb 2015)