A One-step Pruning-recovery Framework for Acceleration of Convolutional Neural Networks
18 June 2019
Dong Wang, Lei Zhou, Xiao Bai, Jun Zhou

Papers citing "A One-step Pruning-recovery Framework for Acceleration of Convolutional Neural Networks" (12 papers)

Exploring Linear Relationship in Feature Map Subspace for ConvNets Compression
Dong Wang, Lei Zhou, Xueni Zhang, Xiao Bai, Jun Zhou (15 Mar 2018)

Learning Efficient Convolutional Networks through Network Slimming
Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, Changshui Zhang (22 Aug 2017)

Channel Pruning for Accelerating Very Deep Neural Networks
Yihui He, Xiangyu Zhang, Jian Sun (19 Jul 2017)

Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
Sergey Zagoruyko, N. Komodakis (12 Dec 2016)

Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
Zhe Cao, Tomas Simon, S. Wei, Yaser Sheikh (24 Nov 2016)

Pruning Filters for Efficient ConvNets
Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf (31 Aug 2016)

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Song Han, Huizi Mao, W. Dally (01 Oct 2015)

Learning both Weights and Connections for Efficient Neural Networks
Song Han, Jeff Pool, J. Tran, W. Dally (08 Jun 2015)

Fast ConvNets Using Group-wise Brain Damage
V. Lebedev, Victor Lempitsky (08 Jun 2015)

FitNets: Hints for Thin Deep Nets
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio (19 Dec 2014)

Speeding up Convolutional Neural Networks with Low Rank Expansions
Max Jaderberg, Andrea Vedaldi, Andrew Zisserman (15 May 2014)

Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, Rob Fergus (02 Apr 2014)