ResearchTrend.AI
SNIP: Single-shot Network Pruning based on Connection Sensitivity
Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr. 4 October 2018. [VLM]

Papers citing "SNIP: Single-shot Network Pruning based on Connection Sensitivity" (showing 50 of 709):
  • Learning Best Combination for Efficient N:M Sparsity. Yuxin Zhang, Mingbao Lin, Zhihang Lin, Yiting Luo, Ke Li, Yongjian Wu, Rongrong Ji. 14 Jun 2022.
  • Zeroth-Order Topological Insights into Iterative Magnitude Pruning. Aishwarya H. Balwani, J. Krzyston. 14 Jun 2022.
  • A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation. Wenjing Yang, G. Wang, Jie Ding, Yuhong Yang. 11 Jun 2022. [MLT]
  • DiSparse: Disentangled Sparsification for Multitask Model Compression. Xing Sun, Ali Hassani, Zhangyang Wang, Gao Huang, Humphrey Shi. 09 Jun 2022.
  • Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm. Aidan Good, Jia-Huei Lin, Hannah Sieg, Mikey Ferguson, Xin Yu, Shandian Zhe, J. Wieczorek, Thiago Serra. 07 Jun 2022.
  • Pruning for Feature-Preserving Circuits in CNNs. Christopher Hamblin, Talia Konkle, G. Alvarez. 03 Jun 2022.
  • Masked Bayesian Neural Networks: Computation and Optimality. Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Yongdai Kim. 02 Jun 2022. [TPM]
  • Bayesian Learning to Discover Mathematical Operations in Governing Equations of Dynamic Systems. Hongpeng Zhou, W. Pan. 01 Jun 2022.
  • FiLM-Ensemble: Probabilistic Deep Learning via Feature-wise Linear Modulation. Mehmet Özgür Türkoglu, Alexander Becker, H. Gündüz, Mina Rezaei, Bernd Bischl, Rodrigo Caye Daudt, Stefano D'Aronco, Jan Dirk Wegner, Konrad Schindler. 31 May 2022. [FedML, UQCV]
  • Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks. Daiki Chijiwa, Shin'ya Yamaguchi, Atsutoshi Kumagai, Yasutoshi Ida. 31 May 2022.
  • Superposing Many Tickets into One: A Performance Booster for Sparse Neural Network Training. Lu Yin, Vlado Menkovski, Meng Fang, Tianjin Huang, Yulong Pei, Mykola Pechenizkiy, Decebal Constantin Mocanu, Shiwei Liu. 30 May 2022.
  • RLx2: Training a Sparse Deep Reinforcement Learning Model from Scratch. Y. Tan, Pihe Hu, L. Pan, Jiatai Huang, Longbo Huang. 30 May 2022. [OffRL]
  • Machine Learning for Microcontroller-Class Hardware: A Review. Swapnil Sayan Saha, S. Sandha, Mani B. Srivastava. 29 May 2022.
  • Spartan: Differentiable Sparsity via Regularized Transportation. Kai Sheng Tai, Taipeng Tian, Ser-Nam Lim. 27 May 2022.
  • Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang. 24 May 2022. [AAML]
  • Hyperparameter Optimization with Neural Network Pruning. Kangil Lee, Junho Yim. 18 May 2022.
  • Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey. Paul Wimmer, Jens Mehnert, Alexandru Paul Condurache. 17 May 2022. [DD]
  • A Survey on AI Sustainability: Emerging Trends on Learning Algorithms and Research Challenges. Zhenghua Chen, Min-man Wu, Alvin Chan, Xiaoli Li, Yew-Soon Ong. 08 May 2022.
  • Convolutional and Residual Networks Provably Contain Lottery Tickets. R. Burkholz. 04 May 2022. [UQCV, MLT]
  • Most Activation Functions Can Win the Lottery Without Excessive Depth. R. Burkholz. 04 May 2022. [MLT]
  • Federated Progressive Sparsification (Purge, Merge, Tune)+. Dimitris Stripelis, Umang Gupta, Greg Ver Steeg, J. Ambite. 26 Apr 2022. [FedML]
  • Receding Neuron Importances for Structured Pruning. Mihai Suteu, Yike Guo. 13 Apr 2022.
  • Regularization-based Pruning of Irrelevant Weights in Deep Neural Architectures. Giovanni Bonetta, Matteo Ribero, R. Cancelliere. 11 Apr 2022.
  • LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification. Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava. 06 Apr 2022.
  • SAFARI: Sparsity enabled Federated Learning with Limited and Unreliable Communications. Yuzhu Mao, Zihao Zhao, Meilin Yang, Le Liang, Yang Liu, Wenbo Ding, Tian-Shing Lan, Xiaoping Zhang. 05 Apr 2022. [FedML]
  • SD-Conv: Towards the Parameter-Efficiency of Dynamic Convolution. Shwai He, Chenbo Jiang, Daize Dong, Liang Ding. 05 Apr 2022.
  • Aligned Weight Regularizers for Pruning Pretrained Neural Networks. J. Ó. Neill, Sourav Dutta, H. Assem. 04 Apr 2022. [VLM]
  • REM: Routing Entropy Minimization for Capsule Networks. Riccardo Renzulli, Enzo Tartaglione, Marco Grangetto. 04 Apr 2022.
  • Supervised Robustness-preserving Data-free Neural Network Pruning. Mark Huasong Meng, Guangdong Bai, Sin Gee Teo, Jin Song Dong. 02 Apr 2022. [AAML]
  • CHEX: CHannel EXploration for CNN Model Compression. Zejiang Hou, Minghai Qin, Fei Sun, Xiaolong Ma, Kun Yuan, Yi Xu, Yen-kuang Chen, Rong Jin, Yuan Xie, S. Kung. 29 Mar 2022.
  • On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks. Hongru Yang, Zhangyang Wang. 27 Mar 2022. [MLT]
  • MKQ-BERT: Quantized BERT with 4-bits Weights and Activations. Hanlin Tang, Xipeng Zhang, Kai Liu, Jianchen Zhu, Zhanhui Kang. 25 Mar 2022. [VLM, MQ]
  • DyRep: Bootstrapping Training with Dynamic Re-parameterization. Tao Huang, Shan You, Bohan Zhang, Yuxuan Du, Fei Wang, Chao Qian, Chang Xu. 24 Mar 2022.
  • Training-free Transformer Architecture Search. Qinqin Zhou, Kekai Sheng, Xiawu Zheng, Ke Li, Xing Sun, Yonghong Tian, Jie Chen, Rongrong Ji. 23 Mar 2022. [ViT]
  • Unified Visual Transformer Compression. Shixing Yu, Tianlong Chen, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Liu, Zhangyang Wang. 15 Mar 2022. [ViT]
  • Interspace Pruning: Using Adaptive Filter Representations to Improve Training of Sparse CNNs. Paul Wimmer, Jens Mehnert, Alexandru Paul Condurache. 15 Mar 2022. [CVBM]
  • The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks. Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe. 09 Mar 2022.
  • Dual Lottery Ticket Hypothesis. Yue Bai, Haiquan Wang, Zhiqiang Tao, Kunpeng Li, Yun Fu. 08 Mar 2022.
  • Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance. Shiwei Liu, Yuesong Tian, Tianlong Chen, Li Shen. 05 Mar 2022.
  • Structured Pruning is All You Need for Pruning CNNs at Initialization. Yaohui Cai, Weizhe Hua, Hongzheng Chen, G. E. Suh, Christopher De Sa, Zhiru Zhang. 04 Mar 2022. [CVBM]
  • LiteTransformerSearch: Training-free Neural Architecture Search for Efficient Language Models. Mojan Javaheripi, Gustavo de Rosa, Subhabrata Mukherjee, S. Shah, Tomasz Religa, C. C. T. Mendes, Sébastien Bubeck, F. Koushanfar, Debadeepta Dey. 04 Mar 2022.
  • Extracting Effective Subnetworks with Gumbel-Softmax. Robin Dupont, M. Alaoui, H. Sahbi, A. Lebois. 25 Feb 2022.
  • Rare Gems: Finding Lottery Tickets at Initialization. Kartik K. Sreenivasan, Jy-yong Sohn, Liu Yang, Matthew Grinde, Alliot Nagle, Hongyi Wang, Eric P. Xing, Kangwook Lee, Dimitris Papailiopoulos. 24 Feb 2022.
  • Prune and Tune Ensembles: Low-Cost Ensemble Learning With Sparse Independent Subnetworks. Tim Whitaker, L. D. Whitley. 23 Feb 2022. [UQCV]
  • Reconstruction Task Finds Universal Winning Tickets. Ruichen Li, Binghui Li, Qi Qian, Liwei Wang. 23 Feb 2022.
  • Sparsity Winning Twice: Better Robust Generalization from More Efficient Training. Tianlong Chen, Zhenyu Zhang, Pengju Wang, Santosh Balachandra, Haoyu Ma, Zehao Wang, Zhangyang Wang. 20 Feb 2022. [OOD, AAML]
  • Bit-wise Training of Neural Network Weights. Cristian Ivan. 19 Feb 2022. [MQ]
  • Amenable Sparse Network Investigator. S. Damadi, Erfan Nouri, Hamed Pirsiavash. 18 Feb 2022.
  • Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients. Milad Alizadeh, Shyam A. Tailor, L. Zintgraf, Joost R. van Amersfoort, Sebastian Farquhar, Nicholas D. Lane, Y. Gal. 16 Feb 2022.
  • Convolutional Network Fabric Pruning With Label Noise. Ilias Benjelloun, B. Lamiroy, E. Koudou. 15 Feb 2022.