SNIP: Single-shot Network Pruning based on Connection Sensitivity [VLM]
Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr
4 October 2018

Papers citing "SNIP: Single-shot Network Pruning based on Connection Sensitivity"

50 / 709 papers shown
MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains
Yaxing Wang, Abel Gonzalez-Garcia, Chenshen Wu, Luis Herranz, Fahad Shahbaz Khan, Shangling Jui, Joost van de Weijer
28 Apr 2021

Lottery Jackpots Exist in Pre-trained Models
Yuxin Zhang, Mingbao Lin, Yan Wang, Rongrong Ji
18 Apr 2021

Accelerating Sparse Deep Neural Networks
Asit K. Mishra, J. Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, Paulius Micikevicius
16 Apr 2021

Extremely Low Footprint End-to-End ASR System for Smart Device
Zhifu Gao, Yiwu Yao, Shiliang Zhang, Jun Yang, Ming Lei, Ian Mcloughlin
06 Apr 2021

How Powerful are Performance Predictors in Neural Architecture Search?
Colin White, Arber Zela, Binxin Ru, Yang Liu, Frank Hutter
02 Apr 2021

Neural Response Interpretation through the Lens of Critical Pathways
Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Christian Rupprecht, Seong Tae Kim, Nassir Navab
31 Mar 2021

The Elastic Lottery Ticket Hypothesis [OOD]
Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang
30 Mar 2021

Training Sparse Neural Network by Constraining Synaptic Weight on Unit Lp Sphere
Weipeng Li, Xiaogang Yang, Chuanxiang Li, Ruitao Lu, Xueli Xie
30 Mar 2021

Compacting Deep Neural Networks for Internet of Things: Methods and Applications
Ke Zhang, Hanbo Ying, Hongning Dai, Lin Li, Yuangyuang Peng, Keyi Guo, Hongfang Yu
20 Mar 2021

Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network [MQ]
James Diffenderfer, B. Kailkhura
17 Mar 2021

Recent Advances on Neural Network Pruning at Initialization [CVBM]
Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu
11 Mar 2021

Robustness to Pruning Predicts Generalization in Deep Neural Networks
Lorenz Kuhn, Clare Lyle, Aidan Gomez, Jonas Rothfuss, Y. Gal
10 Mar 2021

Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy [AAML]
Lucas Liebenwein, Cenk Baykal, Brandon Carter, David K Gifford, Daniela Rus
04 Mar 2021

Sparse Training Theory for Scalable and Efficient Agents
Decebal Constantin Mocanu, Elena Mocanu, T. Pinto, Selima Curci, Phuong H. Nguyen, M. Gibescu, D. Ernst, Z. Vale
02 Mar 2021

FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
Samuel Horváth, Stefanos Laskaridis, Mario Almeida, Ilias Leondiadis, Stylianos I. Venieris, Nicholas D. Lane
26 Feb 2021

Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective [OOD]
Wuyang Chen, Xinyu Gong, Zhangyang Wang
23 Feb 2021

An Information-Theoretic Justification for Model Pruning
Berivan Isik, Tsachy Weissman, Albert No
16 Feb 2021

Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry
16 Feb 2021

Scaling Up Exact Neural Network Compression by ReLU Stability
Thiago Serra, Xin Yu, Abhinav Kumar, Srikumar Ramalingam
15 Feb 2021

Neural Network Compression for Noisy Storage Devices
Berivan Isik, Kristy Choi, Xin-Yang Zheng, Tsachy Weissman, Stefano Ermon, H. P. Wong, Armin Alaghi
15 Feb 2021

Neural Architecture Search as Program Transformation Exploration
Jack Turner, Elliot J. Crowley, Michael F. P. O'Boyle
12 Feb 2021

Dense for the Price of Sparse: Improved Performance of Sparsely Initialized Networks via a Subspace Offset
Ilan Price, Jared Tanner
12 Feb 2021

RANP: Resource Aware Neuron Pruning at Initialization for 3D CNNs
Zhiwei Xu, Thalaiyasingam Ajanthan, Vibhav Vineet, Richard I. Hartley
09 Feb 2021

Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch
Aojun Zhou, Yukun Ma, Junnan Zhu, Jianbo Liu, Zhijie Zhang, Kun Yuan, Wenxiu Sun, Hongsheng Li
08 Feb 2021

SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks
Enzo Tartaglione, Andrea Bragagnolo, Francesco Odierna, Attilio Fiandrotti, Marco Grangetto
07 Feb 2021

Truly Sparse Neural Networks at Scale
Selima Curci, Decebal Constantin Mocanu, Mykola Pechenizkiy
02 Feb 2021

A Unified Paths Perspective for Pruning at Initialization
Thomas Gebhart, Udit Saxena, Paul Schrater
26 Jan 2021

Pruning and Quantization for Deep Neural Network Acceleration: A Survey [MQ]
Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang
24 Jan 2021

Zero-Cost Proxies for Lightweight NAS
Mohamed S. Abdelfattah, Abhinav Mehrotra, L. Dudziak, Nicholas D. Lane
20 Jan 2021

Slot Machines: Discovering Winning Combinations of Random Weights in Neural Networks
Maxwell Mbabilla Aladago, Lorenzo Torresani
16 Jan 2021

ACP: Automatic Channel Pruning via Clustering and Swarm Intelligence Optimization for CNN
Jingfei Chang, Yang Lu, Ping Xue, Yiqun Xu, Zhen Wei
16 Jan 2021

Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers [MoE]
Machel Reid, Edison Marrese-Taylor, Y. Matsuo
01 Jan 2021

AttentionLite: Towards Efficient Self-Attention Models for Vision
Souvik Kundu, Sairam Sundaresan
21 Dec 2020

Efficient CNN-LSTM based Image Captioning using Neural Network Compression [VLM]
Harshit Rampal, Aman Mohanty
17 Dec 2020

Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks
Xiangyu Chang, Yingcong Li, Samet Oymak, Christos Thrampoulidis
16 Dec 2020

Perceptron Theory Can Predict the Accuracy of Neural Networks [GNN]
Denis Kleyko, A. Rosato, E. P. Frady, Massimo Panella, Friedrich T. Sommer
14 Dec 2020

The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models
Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Michael Carbin, Zhangyang Wang
12 Dec 2020

The Lottery Ticket Hypothesis for Object Recognition
Sharath Girish, Shishira R. Maiya, Kamal Gupta, Hao Chen, L. Davis, Abhinav Shrivastava
08 Dec 2020

Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders
Zahra Atashgahi, Ghada Sokar, T. Lee, Elena Mocanu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy
01 Dec 2020

Deconstructing the Structure of Sparse Neural Networks
M. V. Gelder, Mitchell Wortsman, Kiana Ehsani
30 Nov 2020

FreezeNet: Full Performance by Reduced Storage Costs
Paul Wimmer, Jens Mehnert, Alexandru Paul Condurache
28 Nov 2020

MetaGater: Fast Learning of Conditional Channel Gated Networks via Federated Meta-Learning [FedML, AI4CE]
Sen Lin, Li Yang, Zhezhi He, Deliang Fan, Junshan Zhang
25 Nov 2020

Rethinking Weight Decay For Efficient Neural Network Pruning
Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, T. Hannagan, David Bertrand
20 Nov 2020

Layer-Wise Data-Free CNN Compression [MQ]
Maxwell Horton, Yanzi Jin, Ali Farhadi, Mohammad Rastegari
18 Nov 2020

LOss-Based SensiTivity rEgulaRization: towards deep sparse neural networks [ODL, UQCV]
Enzo Tartaglione, Andrea Bragagnolo, Attilio Fiandrotti, Marco Grangetto
16 Nov 2020

Using noise to probe recurrent neural network structure and prune synapses
Eli Moore, Rishidev Chaudhuri
14 Nov 2020

Efficient Knowledge Distillation for RNN-Transducer Models
S. Panchapagesan, Daniel S. Park, Chung-Cheng Chiu, Yuan Shangguan, Qiao Liang, A. Gruenstein
11 Nov 2020

Effective Model Compression via Stage-wise Pruning [SyDa]
Mingyang Zhang, Xinyi Yu, Jingtao Rong, L. Ou
10 Nov 2020

Know What You Don't Need: Single-Shot Meta-Pruning for Attention Heads [VLM]
Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Qun Liu, Maosong Sun
07 Nov 2020

A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs [AAML]
Souvik Kundu, M. Nazemi, P. Beerel, Massoud Pedram
03 Nov 2020