Importance Estimation for Neural Network Pruning
arXiv:1906.10771 · 25 June 2019
Pavlo Molchanov, Arun Mallya, Stephen Tyree, I. Frosio, Jan Kautz
Community: 3DPC
Papers citing "Importance Estimation for Neural Network Pruning" (showing 50 of 439):
- Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness. Tianlong Chen, Huan Zhang, Zhenyu (Allen) Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang. [AAML] 15 Jun 2022 (11 / 11 / 0)
- DiSparse: Disentangled Sparsification for Multitask Model Compression. Xing Sun, Ali Hassani, Zhangyang Wang, Gao Huang, Humphrey Shi. 09 Jun 2022 (34 / 21 / 0)
- Neural Network Compression via Effective Filter Analysis and Hierarchical Pruning. Ziqi Zhou, Li Lian, Yilong Yin, Ze Wang. 07 Jun 2022 (16 / 1 / 0)
- RLx2: Training a Sparse Deep Reinforcement Learning Model from Scratch. Y. Tan, Pihe Hu, L. Pan, Jiatai Huang, Longbo Huang. [OffRL] 30 May 2022 (10 / 19 / 0)
- Towards Communication-Learning Trade-off for Federated Learning at the Network Edge. Jian-ji Ren, Wanli Ni, Hui Tian. [FedML] 27 May 2022 (23 / 15 / 0)
- Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. Tianlong Chen, Zhenyu (Allen) Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang. [AAML] 24 May 2022 (46 / 25 / 0)
- The Importance of Being Parameters: An Intra-Distillation Method for Serious Gains. Haoran Xu, Philipp Koehn, Kenton W. Murray. [MoMe] 23 May 2022 (21 / 4 / 0)
- Parameter-Efficient Sparsity for Large Language Models Fine-Tuning. Yuchao Li, Fuli Luo, Chuanqi Tan, Mengdi Wang, Songfang Huang, Shen Li, Junjie Bai. [MQ] 23 May 2022 (57 / 0 / 0)
- Dataset Pruning: Reducing Training Data by Examining Generalization Influence. Shuo Yang, Zeke Xie, Hanyu Peng, Minjing Xu, Mingming Sun, P. Li. [DD] 19 May 2022 (155 / 107 / 0)
- Binarizing by Classification: Is soft function really necessary? Yefei He, Luoming Zhang, Weijia Wu, Hong Zhou. [MQ] 16 May 2022 (23 / 3 / 0)
- Revisiting Random Channel Pruning for Neural Network Compression. Yawei Li, Kamil Adamczewski, Wen Li, Shuhang Gu, Radu Timofte, Luc Van Gool. 11 May 2022 (24 / 81 / 0)
- Robust Learning of Parsimonious Deep Neural Networks. Valentin Frank Ingmar Guenter, Athanasios Sideris. 10 May 2022 (29 / 2 / 0)
- Domino Saliency Metrics: Improving Existing Channel Saliency Metrics with Structural Information. Kaveena Persand, Andrew Anderson, David Gregg. 04 May 2022 (20 / 0 / 0)
- Compact Neural Networks via Stacking Designed Basic Units. Weichao Lan, Y. Cheung, Juyong Jiang. 03 May 2022 (35 / 0 / 0)
- Attentive Fine-Grained Structured Sparsity for Image Restoration. Junghun Oh, Heewon Kim, Seungjun Nah, Chee Hong, Jonghyun Choi, Kyoung Mu Lee. 26 Apr 2022 (21 / 18 / 0)
- Merging of neural networks. Martin Pasen, Vladimír Boza. [FedML, MoMe] 21 Apr 2022 (30 / 2 / 0)
- MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation. Simiao Zuo, Qingru Zhang, Chen Liang, Pengcheng He, T. Zhao, Weizhu Chen. [MoE] 15 Apr 2022 (24 / 38 / 0)
- E^2TAD: An Energy-Efficient Tracking-based Action Detector. Xin Hu, Zhenyu Wu, Haoyuan Miao, Siqi Fan, Taiyu Long, ..., Pengcheng Pi, Yi Wu, Zhou Ren, Zhangyang Wang, G. Hua. 09 Apr 2022 (24 / 2 / 0)
- REM: Routing Entropy Minimization for Capsule Networks. Riccardo Renzulli, Enzo Tartaglione, Marco Grangetto. 04 Apr 2022 (14 / 4 / 0)
- Structured Pruning Learns Compact and Accurate Models. Mengzhou Xia, Zexuan Zhong, Danqi Chen. [VLM] 01 Apr 2022 (9 / 177 / 0)
- CHEX: CHannel EXploration for CNN Model Compression. Zejiang Hou, Minghai Qin, Fei Sun, Xiaolong Ma, Kun Yuan, Yi Xu, Yen-kuang Chen, Rong Jin, Yuan Xie, S. Kung. 29 Mar 2022 (16 / 71 / 0)
- A Fast Post-Training Pruning Framework for Transformers. Woosuk Kwon, Sehoon Kim, Michael W. Mahoney, Joseph Hassoun, Kurt Keutzer, A. Gholami. 29 Mar 2022 (29 / 144 / 0)
- Vision Transformer Compression with Structured Pruning and Low Rank Approximation. Ankur Kumar. [ViT] 25 Mar 2022 (28 / 6 / 0)
- Language Adaptive Cross-lingual Speech Representation Learning with Sparse Sharing Sub-networks. Yizhou Lu, Mingkun Huang, Xinghua Qu, Pengfei Wei, Zejun Ma. 09 Mar 2022 (27 / 19 / 0)
- Low-Cost On-device Partial Domain Adaptation (LoCO-PDA): Enabling efficient CNN retraining on edge devices. A. Rajagopal, C. Bouganis. 01 Mar 2022 (25 / 0 / 0)
- Optimal channel selection with discrete QCQP. Yeonwoo Jeong, Deokjae Lee, Gaon An, Changyong Son, Hyun Oh Song. 24 Feb 2022 (16 / 1 / 0)
- Sparsity Winning Twice: Better Robust Generalization from More Efficient Training. Tianlong Chen, Zhenyu (Allen) Zhang, Pengju Wang, Santosh Balachandra, Haoyu Ma, Zehao Wang, Zhangyang Wang. [OOD, AAML] 20 Feb 2022 (90 / 47 / 0)
- Practical Network Acceleration with Tiny Sets. G. Wang, Jianxin Wu. 16 Feb 2022 (35 / 8 / 0)
- Convolutional Network Fabric Pruning With Label Noise. Ilias Benjelloun, B. Lamiroy, E. Koudou. 15 Feb 2022 (16 / 0 / 0)
- Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets. Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wang, Zhangyang Wang. 09 Feb 2022 (21 / 35 / 0)
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models. Chen Liang, Haoming Jiang, Simiao Zuo, Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, T. Zhao. 06 Feb 2022 (17 / 14 / 0)
- PRUNIX: Non-Ideality Aware Convolutional Neural Network Pruning for Memristive Accelerators. Ali Alshaarawy, A. Amirsoleimani, R. Genov. 03 Feb 2022 (13 / 1 / 0)
- Signing the Supermask: Keep, Hide, Invert. Nils Koster, O. Grothe, Achim Rettinger. 31 Jan 2022 (31 / 10 / 0)
- GhostNets on Heterogeneous Devices via Cheap Operations. Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chunjing Xu, Enhua Wu, Qi Tian. 10 Jan 2022 (19 / 102 / 0)
- LegoDNN: Block-grained Scaling of Deep Neural Networks for Mobile Vision. Rui Han, Qinglong Zhang, C. Liu, Guoren Wang, Jian Tang, L. Chen. 18 Dec 2021 (21 / 44 / 0)
- SNF: Filter Pruning via Searching the Proper Number of Filters. Pengkun Liu, Yaru Yue, Yanjun Guo, Xingxiang Tao, Xiaoguang Zhou. [3DPC] 14 Dec 2021 (28 / 0 / 0)
- Human Guided Exploitation of Interpretable Attention Patterns in Summarization and Topic Segmentation. Raymond Li, Wen Xiao, Linzi Xing, Lanjun Wang, Gabriel Murray, Giuseppe Carenini. [ViT] 10 Dec 2021 (27 / 7 / 0)
- Effective dimension of machine learning models. Amira Abbas, David Sutter, Alessio Figalli, Stefan Woerner. 09 Dec 2021 (82 / 17 / 0)
- Batch Normalization Tells You Which Filter is Important. Junghun Oh, Heewon Kim, Sungyong Baik, Chee Hong, Kyoung Mu Lee. [CVBM] 02 Dec 2021 (30 / 8 / 0)
- Optimizing for In-memory Deep Learning with Emerging Memory Technology. Zhehui Wang, Tao Luo, Rick Siow Mong Goh, Wei Zhang, Weng-Fai Wong. 01 Dec 2021 (18 / 1 / 0)
- Automatic Neural Network Pruning that Efficiently Preserves the Model Accuracy. Thibault Castells, Seul-Ki Yeom. [3DV] 18 Nov 2021 (18 / 3 / 0)
- Stacked BNAS: Rethinking Broad Convolutional Neural Network for Neural Architecture Search. Zixiang Ding, Yaran Chen, Nannan Li, Dong Zhao, C. L. Philip Chen. 15 Nov 2021 (19 / 8 / 0)
- HASHTAG: Hash Signatures for Online Detection of Fault-Injection Attacks on Deep Neural Networks. Mojan Javaheripi, F. Koushanfar. 02 Nov 2021 (18 / 22 / 0)
- DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models. Xuxi Chen, Tianlong Chen, Weizhu Chen, Ahmed Hassan Awadallah, Zhangyang Wang, Yu Cheng. [MoE, ALM] 30 Oct 2021 (20 / 10 / 0)
- CHIP: CHannel Independence-based Pruning for Compact Neural Networks. Yang Sui, Miao Yin, Yi Xie, Huy Phan, S. Zonouz, Bo Yuan. [VLM] 26 Oct 2021 (33 / 128 / 0)
- Reconstructing Pruned Filters using Cheap Spatial Transformations. Roy Miles, K. Mikolajczyk. 25 Oct 2021 (26 / 0 / 0)
- Exploring Gradient Flow Based Saliency for DNN Model Compression. Xinyu Liu, Baopu Li, Zhen Chen, Yixuan Yuan. [FAtt] 24 Oct 2021 (11 / 9 / 0)
- When to Prune? A Policy towards Early Structural Pruning. Maying Shen, Pavlo Molchanov, Hongxu Yin, J. Álvarez. [VLM] 22 Oct 2021 (28 / 53 / 0)
- CATRO: Channel Pruning via Class-Aware Trace Ratio Optimization. Wenzheng Hu, Zhengping Che, Ning Liu, Mingyang Li, Jian Tang, Changshui Zhang, Jianqiang Wang. 21 Oct 2021 (22 / 22 / 0)
- HALP: Hardware-Aware Latency Pruning. Maying Shen, Hongxu Yin, Pavlo Molchanov, Lei Mao, Jianna Liu, J. Álvarez. [VLM] 20 Oct 2021 (46 / 13 / 0)