Graph-Adaptive Pruning for Efficient Inference of Convolutional Neural Networks

21 November 2018
Mengdi Wang
Qing Zhang
Jun Yang
Xiaoyuan Cui
Wei Lin
arXiv: 1811.08589

Papers citing "Graph-Adaptive Pruning for Efficient Inference of Convolutional Neural Networks"

12 / 12 papers shown

1. Learning Efficient Convolutional Networks through Network Slimming (22 Aug 2017)
   Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, Changshui Zhang
2. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression (20 Jul 2017)
   Jian-Hao Luo, Jianxin Wu, Weiyao Lin
3. Channel Pruning for Accelerating Very Deep Neural Networks (19 Jul 2017)
   Yihui He, Xiangyu Zhang, Jian Sun
4. Exploring the Regularity of Sparse Structure in Convolutional Neural Networks (24 May 2017)
   Huizi Mao, Song Han, Jeff Pool, Wenshuo Li, Xingyu Liu, Yu Wang, W. Dally
5. Aggregated Residual Transformations for Deep Neural Networks (16 Nov 2016)
   Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
6. Learning Structured Sparsity in Deep Neural Networks (12 Aug 2016)
   W. Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
7. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size (24 Feb 2016)
   F. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, W. Dally, Kurt Keutzer
8. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding (01 Oct 2015)
   Song Han, Huizi Mao, W. Dally
9. Learning both Weights and Connections for Efficient Neural Networks (08 Jun 2015)
   Song Han, Jeff Pool, J. Tran, W. Dally
10. Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition (19 Dec 2014)
    V. Lebedev, Yaroslav Ganin, M. Rakhuba, Ivan Oseledets, Victor Lempitsky
11. FitNets: Hints for Thin Deep Nets (19 Dec 2014)
    Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio
12. Compressing Deep Convolutional Networks using Vector Quantization (18 Dec 2014)
    Yunchao Gong, Liu Liu, Ming Yang, Lubomir D. Bourdev