ResearchTrend.AI

SNIP: Single-shot Network Pruning based on Connection Sensitivity
arXiv:1810.02340, 4 October 2018
Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr

Papers citing "SNIP: Single-shot Network Pruning based on Connection Sensitivity"

50 of 708 citing papers shown

When BERT Plays the Lottery, All Tickets Are Winning
  Sai Prasanna, Anna Rogers, Anna Rumshisky (01 May 2020)

Masking as an Efficient Alternative to Finetuning for Pretrained Language Models
  Mengjie Zhao, Tao R. Lin, Fei Mi, Martin Jaggi, Hinrich Schütze (26 Apr 2020)

Composition of Saliency Metrics for Channel Pruning with a Myopic Oracle
  Kaveena Persand, Andrew Anderson, David Gregg (03 Apr 2020)

Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training
  Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr, Martin Jaggi (25 Mar 2020)

What is the State of Neural Network Pruning?
  Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag (06 Mar 2020)

Towards Practical Lottery Ticket Hypothesis for Adversarial Training
  Bai Li, Shiqi Wang, Yunhan Jia, Yantao Lu, Zhenyu Zhong, Lawrence Carin, Suman Jana (06 Mar 2020)

Comparing Rewinding and Fine-tuning in Neural Network Pruning
  Alex Renda, Jonathan Frankle, Michael Carbin (05 Mar 2020)

HYDRA: Pruning Adversarially Robust Neural Networks
  Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana (24 Feb 2020)

Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks
  Sai Aparna Aketi, Sourjya Roy, A. Raghunathan, Kaushik Roy (23 Feb 2020)

Robust Pruning at Initialization
  Soufiane Hayou, Jean-François Ton, Arnaud Doucet, Yee Whye Teh (19 Feb 2020)

Identifying Critical Neurons in ANN Architectures using Mixed Integer Programming
  M. Elaraby, Guy Wolf, Margarida Carvalho (17 Feb 2020)

Retrain or not retrain? -- efficient pruning methods of deep CNN networks
  Marcin Pietroń, Maciej Wielgosz (12 Feb 2020)

PCNN: Pattern-based Fine-Grained Regular Pruning towards Optimizing CNN Accelerators
  Zhanhong Tan, Jiebo Song, Xiaolong Ma, S. Tan, Hongyang Chen, ..., Yifu Wu, Shaokai Ye, Yanzhi Wang, Dehui Li, Kaisheng Ma (11 Feb 2020)

Soft Threshold Weight Reparameterization for Learnable Sparsity
  Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, Ali Farhadi (08 Feb 2020)

Activation Density driven Energy-Efficient Pruning in Training
  Timothy Foldy-Porto, Yeshwanth Venkatesha, Priyadarshini Panda (07 Feb 2020)

Automatic Pruning for Quantized Neural Networks
  Luis Guerra, Bohan Zhuang, Ian Reid, Tom Drummond (03 Feb 2020)

Efficient and Stable Graph Scattering Transforms via Pruning
  V. Ioannidis, Siheng Chen, G. Giannakis (27 Jan 2020)

Modeling of Pruning Techniques for Deep Neural Networks Simplification
  Morteza Mousa Pasandi, M. Hajabdollahi, N. Karimi, S. Samavi (13 Jan 2020)

Discrimination-aware Network Pruning for Deep Model Compression
  Jing Liu, Bohan Zhuang, Zhuangwei Zhuang, Yong Guo, Junzhou Huang, Jin-Hui Zhu, Mingkui Tan (04 Jan 2020)

Optimization for deep learning: theory and algorithms
  Ruoyu Sun (19 Dec 2019)

Linear Mode Connectivity and the Lottery Ticket Hypothesis
  Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin (11 Dec 2019)

Explicit Group Sparse Projection with Applications to Deep Learning and NMF
  Riyasat Ohib, Nicolas Gillis, Niccolò Dalmasso, Sameena Shah, Vamsi K. Potluru, Sergey Plis (09 Dec 2019)

One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation
  Matthew Shunshi Zhang, Bradly C. Stadie (30 Nov 2019)

What's Hidden in a Randomly Weighted Neural Network?
  Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, Mohammad Rastegari (29 Nov 2019)

Rigging the Lottery: Making All Tickets Winners
  Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, Erich Elsen (25 Nov 2019)

Graph Pruning for Model Compression
  Mingyang Zhang, Xinyi Yu, Jingtao Rong, L. Ou (22 Nov 2019)

Provable Filter Pruning for Efficient Neural Networks
  Lucas Liebenwein, Cenk Baykal, Harry Lang, Dan Feldman, Daniela Rus (18 Nov 2019)

What Do Compressed Deep Neural Networks Forget?
  Sara Hooker, Aaron Courville, Gregory Clark, Yann N. Dauphin, Andrea Frome (13 Nov 2019)

Mirror Descent View for Neural Network Quantization
  Thalaiyasingam Ajanthan, Kartik Gupta, Philip Torr, Richard I. Hartley, P. Dokania (18 Oct 2019)

SiPPing Neural Networks: Sensitivity-informed Provable Pruning of Neural Networks
  Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus (11 Oct 2019)

Optimizing Speech Recognition For The Edge
  Yuan Shangguan, Jian Li, Qiao Liang, R. Álvarez, Ian McGraw (26 Sep 2019)

Model Pruning Enables Efficient Federated Learning on Edge Devices
  Yuang Jiang, Shiqiang Wang, Victor Valls, Bongjun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas (26 Sep 2019)

Class-dependent Compression of Deep Neural Networks
  R. Entezari, O. Saukh (23 Sep 2019)

RNN Architecture Learning with Sparse Regularization
  Jesse Dodge, Roy Schwartz, Hao Peng, Noah A. Smith (06 Sep 2019)

Image Captioning with Sparse Recurrent Neural Network
  J. Tan, Chee Seng Chan, Joon Huang Chuah (28 Aug 2019)

DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures
  Huanrui Yang, W. Wen, H. Li (27 Aug 2019)

Sparse Networks from Scratch: Faster Training without Losing Performance
  Tim Dettmers, Luke Zettlemoyer (10 Jul 2019)

On improving deep learning generalization with adaptive sparse connectivity
  Shiwei Liu, Decebal Constantin Mocanu, Mykola Pechenizkiy (27 Jun 2019)

A Signal Propagation Perspective for Pruning Neural Networks at Initialization
  Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, Philip Torr (14 Jun 2019)

Towards Compact and Robust Deep Neural Networks
  Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana (14 Jun 2019)

Taxonomy of Saliency Metrics for Channel Pruning
  Kaveena Persand, Andrew Anderson, David Gregg (11 Jun 2019)

Weight Agnostic Neural Networks
  Adam Gaier, David R Ha (11 Jun 2019)

BlockSwap: Fisher-guided Block Substitution for Network Compression on a Budget
  Jack Turner, Elliot J. Crowley, Michael F. P. O'Boyle, Amos Storkey, Gavia Gray (10 Jun 2019)

Sparse Transfer Learning via Winning Lottery Tickets
  Rahul Mehta (19 May 2019)

BayesNAS: A Bayesian Approach for Neural Architecture Search
  Hongpeng Zhou, Minghao Yang, Jun Wang, Wei Pan (13 May 2019)

AutoSlim: Towards One-Shot Architecture Search for Channel Numbers
  Jiahui Yu, Thomas Huang (27 Mar 2019)

How Can We Be So Dense? The Benefits of Using Highly Sparse Representations
  Subutai Ahmad, Luiz Scheinkman (27 Mar 2019)

A Brain-inspired Algorithm for Training Highly Sparse Neural Networks
  Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy (17 Mar 2019)

Stabilizing the Lottery Ticket Hypothesis
  Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin (05 Mar 2019)

Single-shot Channel Pruning Based on Alternating Direction Method of Multipliers
  Chengcheng Li, Zehao Wang, Xiangyang Wang, Hairong Qi (18 Feb 2019)