A Closer Look at Structured Pruning for Neural Network Compression

10 October 2018
Elliot J. Crowley, Jack Turner, Amos Storkey, Michael F. P. O'Boyle
3DPC
arXiv (abs) · PDF · HTML · GitHub (141★)

Papers citing "A Closer Look at Structured Pruning for Neural Network Compression"

27 of 27 papers shown

The State of Sparsity in Deep Neural Networks
Trevor Gale, Erich Elsen, Sara Hooker
161 / 761 / 0 · 25 Feb 2019

Rethinking ImageNet Pre-training
Kaiming He, Ross B. Girshick, Piotr Dollár
VLM, SSeg
130 / 1,086 / 0 · 21 Nov 2018

Rethinking the Value of Network Pruning
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell
36 / 1,474 / 0 · 11 Oct 2018

SNIP: Single-shot Network Pruning based on Connection Sensitivity
Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr
VLM
266 / 1,207 / 0 · 04 Oct 2018

DARTS: Differentiable Architecture Search
Hanxiao Liu, Karen Simonyan, Yiming Yang
204 / 4,366 / 0 · 24 Jun 2018

NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications
Tien-Ju Yang, Andrew G. Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, Hartwig Adam
138 / 521 / 0 · 09 Apr 2018

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
245 / 3,485 / 0 · 09 Mar 2018

AMC: AutoML for Model Compression and Acceleration on Mobile Devices
Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li Li, Song Han
98 / 1,349 / 0 · 10 Feb 2018

Residual Connections Encourage Iterative Inference
Stanislaw Jastrzebski, Devansh Arpit, Nicolas Ballas, Vikas Verma, Tong Che, Yoshua Bengio
57 / 155 / 0 · 13 Oct 2017

Learning Efficient Convolutional Networks through Network Slimming
Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, Changshui Zhang
125 / 2,426 / 0 · 22 Aug 2017

Learning Transferable Architectures for Scalable Image Recognition
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le
186 / 5,607 / 0 · 21 Jul 2017

Channel Pruning for Accelerating Very Deep Neural Networks
Yihui He, Xiangyu Zhang, Jian Sun
204 / 2,529 / 0 · 19 Jul 2017

Bayesian Compression for Deep Learning
Christos Louizos, Karen Ullrich, Max Welling
UQCV, BDL
166 / 481 / 0 · 24 May 2017

In-Datacenter Performance Analysis of a Tensor Processing Unit
N. Jouppi, C. Young, Nishant Patil, David Patterson, Gaurav Agrawal, ..., Vijay Vasudevan, Richard Walter, Walter Wang, Eric Wilcox, Doe Hyun Yoon
235 / 4,638 / 0 · 16 Apr 2017

Efficient Processing of Deep Neural Networks: A Tutorial and Survey
Vivienne Sze, Yu-hsin Chen, Tien-Ju Yang, J. Emer
AAML, 3DV
120 / 3,026 / 0 · 27 Mar 2017

Variational Dropout Sparsifies Deep Neural Networks
Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov
BDL
147 / 831 / 0 · 19 Jan 2017

Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
Sergey Zagoruyko, N. Komodakis
147 / 2,586 / 0 · 12 Dec 2016

Pruning Filters for Efficient ConvNets
Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf
3DPC
195 / 3,705 / 0 · 31 Aug 2016

Learning Structured Sparsity in Deep Neural Networks
W. Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
187 / 2,341 / 0 · 12 Aug 2016

Wide Residual Networks
Sergey Zagoruyko, N. Komodakis
351 / 8,000 / 0 · 23 May 2016

Deep Roots: Improving CNN Efficiency with Hierarchical Filter Groups
Yani Andrew Ioannou, D. Robertson, R. Cipolla, A. Criminisi
80 / 265 / 0 · 20 May 2016

Deep Networks with Stochastic Depth
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Q. Weinberger
215 / 2,361 / 0 · 30 Mar 2016

Multi-Scale Context Aggregation by Dilated Convolutions
Fisher Yu, V. Koltun
SSeg
271 / 8,459 / 0 · 23 Nov 2015

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Song Han, Huizi Mao, W. Dally
3DGS
263 / 8,859 / 0 · 01 Oct 2015

Learning both Weights and Connections for Efficient Neural Networks
Song Han, Jeff Pool, J. Tran, W. Dally
CVBM
313 / 6,700 / 0 · 08 Jun 2015

Striving for Simplicity: The All Convolutional Net
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller
FAtt
251 / 4,681 / 0 · 21 Dec 2014

Do Deep Nets Really Need to be Deep?
Lei Jimmy Ba, R. Caruana
167 / 2,119 / 0 · 21 Dec 2013