SNIP: Single-shot Network Pruning based on Connection Sensitivity
Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr
arXiv:1810.02340 (4 October 2018)

Papers citing "SNIP: Single-shot Network Pruning based on Connection Sensitivity" (50 of 709 shown)
1. Learning Compact Representations of Neural Networks using DiscriminAtive Masking (DAM)
   Jie Bu, Arka Daw, M. Maruf, Anuj Karpatne (01 Oct 2021)

2. Powerpropagation: A sparsity inducing weight reparameterisation
   Jonathan Richard Schwarz, Siddhant M. Jayakumar, Razvan Pascanu, P. Latham, Yee Whye Teh (01 Oct 2021)

3. Smart at what cost? Characterising Mobile Deep Neural Networks in the wild
   Mario Almeida, Stefanos Laskaridis, Abhinav Mehrotra, L. Dudziak, Ilias Leontiadis, Nicholas D. Lane (28 Sep 2021) [HAI]

4. Neural network relief: a pruning algorithm based on neural activity
   Aleksandr Dekhovich, David Tax, M. Sluiter, Miguel A. Bessa (22 Sep 2021)

5. Reproducibility Study: Comparing Rewinding and Fine-tuning in Neural Network Pruning
   Szymon Mikler (20 Sep 2021) [AAML]

6. GDP: Stabilized Neural Network Pruning via Gates with Differentiable Polarization
   Yi Guo, Huan Yuan, Jianchao Tan, Zhangyang Wang, Sen Yang, Ji Liu (06 Sep 2021)

7. Sparsifying the Update Step in Graph Neural Networks
   J. Lutzeyer, Changmin Wu, Michalis Vazirgiannis (02 Sep 2021)

8. NASI: Label- and Data-agnostic Neural Architecture Search at Initialization
   Yao Shu, Shaofeng Cai, Zhongxiang Dai, Beng Chin Ooi, K. H. Low (02 Sep 2021)

9. AIP: Adversarial Iterative Pruning Based on Knowledge Transfer for Convolutional Neural Networks
   Jingfei Chang, Yang Lu, Ping Xue, Yiqun Xu, Zhen Wei (31 Aug 2021)

10. Layer-wise Model Pruning based on Mutual Information
    Chun Fan, Jiwei Li, Xiang Ao, Fei Wu, Yuxian Meng, Xiaofei Sun (28 Aug 2021)

11. An Information Theory-inspired Strategy for Automatic Network Pruning
    Xiawu Zheng, Yuexiao Ma, Teng Xi, Gang Zhang, Errui Ding, Yuchao Li, Jie Chen, Yonghong Tian, Rongrong Ji (19 Aug 2021)

12. A fast asynchronous MCMC sampler for sparse Bayesian inference
    Yves F. Atchadé, Liwei Wang (14 Aug 2021)

13. Towards Structured Dynamic Sparse Pre-Training of BERT
    A. Dietrich, Frithjof Gressmann, Douglas Orr, Ivan Chelombiev, Daniel Justus, Carlo Luschi (13 Aug 2021)

14. Model Preserving Compression for Neural Networks
    Jerry Chee, Megan Flynn, Anil Damle, Chris De Sa (30 Jul 2021)

15. COPS: Controlled Pruning Before Training Starts
    Paul Wimmer, Jens Mehnert, Alexandru Paul Condurache (27 Jul 2021)

16. Over-Parameterization and Generalization in Audio Classification
    Khaled Koutini, Hamid Eghbalzadeh, Florian Henkel, Jan Schluter, Gerhard Widmer (19 Jul 2021)

17. Only Train Once: A One-Shot Neural Network Training And Pruning Framework
    Tianyi Chen, Bo Ji, Tianyu Ding, Biyi Fang, Guanyi Wang, Zhihui Zhu, Luming Liang, Yixin Shi, Sheng Yi, Xiao Tu (15 Jul 2021)

18. How many degrees of freedom do we need to train deep networks: a loss landscape perspective
    Brett W. Larsen, Stanislav Fort, Nico Becker, Surya Ganguli (13 Jul 2021) [UQCV]

19. Connectivity Matters: Neural Network Pruning Through the Lens of Effective Sparsity
    Artem Vysogorets, Julia Kempe (05 Jul 2021)

20. One-Cycle Pruning: Pruning ConvNets Under a Tight Training Budget
    Nathan Hubens, M. Mancas, B. Gosselin, Marius Preda, T. Zaharia (05 Jul 2021)

21. Why is Pruning at Initialization Immune to Reinitializing and Shuffling?
    Sahib Singh, Rosanne Liu (05 Jul 2021) [AAML]

22. Popcorn: Paillier Meets Compression For Efficient Oblivious Neural Network Inference
    Jun Wang, Chao Jin, S. Meftah, Khin Mi Mi Aung (05 Jul 2021) [UQCV]

23. Learned Token Pruning for Transformers
    Sehoon Kim, Sheng Shen, D. Thorsley, A. Gholami, Woosuk Kwon, Joseph Hassoun, Kurt Keutzer (02 Jul 2021)

24. Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?
    Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, ..., Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang (01 Jul 2021)

25. Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
    Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu (28 Jun 2021) [OOD]

26. AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks
    Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh (23 Jun 2021) [AI4CE]

27. Connection Sensitivity Matters for Training-free DARTS: From Architecture-Level Scoring to Operation-Level Sensitivity Analysis
    Miao Zhang, Wei Huang, Li Wang (22 Jun 2021)

28. Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
    Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu (19 Jun 2021)

29. Pruning Randomly Initialized Neural Networks with Iterative Randomization
    Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, T. Inoue (17 Jun 2021)

30. Towards Understanding Iterative Magnitude Pruning: Why Lottery Tickets Win
    Jaron Maene, Mingxiao Li, Marie-Francine Moens (13 Jun 2021)

31. Zero-Cost Operation Scoring in Differentiable Architecture Search
    Li Xiang, L. Dudziak, Mohamed S. Abdelfattah, Thomas C. P. Chau, Nicholas D. Lane, Hongkai Wen (12 Jun 2021)

32. PARP: Prune, Adjust and Re-Prune for Self-Supervised Speech Recognition
    Cheng-I Jeff Lai, Yang Zhang, Alexander H. Liu, Shiyu Chang, Yi-Lun Liao, Yung-Sung Chuang, Kaizhi Qian, Sameer Khurana, David D. Cox, James R. Glass (10 Jun 2021) [VLM]

33. Distilling Image Classifiers in Object Detectors
    Shuxuan Guo, J. Álvarez, Mathieu Salzmann (09 Jun 2021) [VLM]

34. Ex uno plures: Splitting One Model into an Ensemble of Subnetworks
    Zhilu Zhang, Vianne R. Gao, M. Sabuncu (09 Jun 2021) [UQCV]

35. Chasing Sparsity in Vision Transformers: An End-to-End Exploration
    Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang (08 Jun 2021) [ViT]

36. FEAR: A Simple Lightweight Method to Rank Architectures
    Debadeepta Dey, Shital C. Shah, Sébastien Bubeck (07 Jun 2021) [OOD]

37. Efficient Lottery Ticket Finding: Less Data is More
    Zhenyu Zhang, Xuxi Chen, Tianlong Chen, Zhangyang Wang (06 Jun 2021)

38. Can Subnetwork Structure be the Key to Out-of-Distribution Generalization?
    Dinghuai Zhang, Kartik Ahuja, Yilun Xu, Yisen Wang, Aaron Courville (05 Jun 2021) [OOD]

39. LEAP: Learnable Pruning for Transformer-based Models
    Z. Yao, Xiaoxia Wu, Linjian Ma, Sheng Shen, Kurt Keutzer, Michael W. Mahoney, Yuxiong He (30 May 2021)

40. Sparse Uncertainty Representation in Deep Learning with Inducing Weights
    H. Ritter, Martin Kukla, Chen Zhang, Yingzhen Li (30 May 2021) [UQCV, BDL]

41. Search Spaces for Neural Model Training
    Darko Stosic, Dusan Stosic (27 May 2021)

42. AirNet: Neural Network Transmission over the Air
    Mikolaj Jankowski, Deniz Gunduz, K. Mikolajczyk (24 May 2021)

43. Spectral Pruning for Recurrent Neural Networks
    Takashi Furuya, Kazuma Suetake, K. Taniguchi, Hiroyuki Kusumoto, Ryuji Saiin, Tomohiro Daimon (23 May 2021)

44. Model Pruning Based on Quantified Similarity of Feature Maps
    Zidu Wang, Xue-jun Liu, Long Huang, Yuxiang Chen, Yufei Zhang, Zhikang Lin, Rui Wang (13 May 2021)

45. Dynamical Isometry: The Missing Ingredient for Neural Network Pruning
    Huan Wang, Can Qin, Yue Bai, Y. Fu (12 May 2021)

46. Network Pruning That Matters: A Case Study on Retraining Variants
    Duong H. Le, Binh-Son Hua (07 May 2021)

47. Structured Ensembles: an Approach to Reduce the Memory Footprint of Ensemble Methods
    Jary Pomponi, Simone Scardapane, A. Uncini (06 May 2021) [UQCV]

48. Encoding Weights of Irregular Sparsity for Fixed-to-Fixed Model Compression
    Baeseong Park, S. Kwon, Daehwan Oh, Byeongwook Kim, Dongsoo Lee (05 May 2021)

49. Initialization and Regularization of Factorized Neural Layers
    M. Khodak, Neil A. Tenenholtz, Lester W. Mackey, Nicolò Fusi (03 May 2021)

50. Effective Sparsification of Neural Networks with Global Sparsity Constraint
    Xiao Zhou, Weizhong Zhang, Hang Xu, Tong Zhang (03 May 2021)