
Fast ConvNets Using Group-wise Brain Damage (arXiv:1506.02515)

8 June 2015
V. Lebedev
Victor Lempitsky
AAML

Papers citing "Fast ConvNets Using Group-wise Brain Damage"

50 / 211 papers shown
Structured Pruning for Efficient ConvNets via Incremental Regularization
Huan Wang
Qiming Zhang
Yuehai Wang
Haoji Hu
3DPC
105
45
0
20 Nov 2018
Multi-layer Pruning Framework for Compressing Single Shot MultiBox Detector
Pravendra Singh
Manikandan Ravikiran
Neeraj Matiyali
Vinay P. Namboodiri
69
21
0
20 Nov 2018
Stability Based Filter Pruning for Accelerating Deep CNNs
Pravendra Singh
Vinay Sameer Raja Kadi
N. Verma
Vinay P. Namboodiri
CVBM
69
26
0
20 Nov 2018
Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method
Yu-xin Zhang
Huan Wang
Yang Luo
Lu Yu
Roland Hu
Hangguan Shan
Tony Q. S. Quek
3DPC
53
11
0
19 Nov 2018
A First Look at Deep Learning Apps on Smartphones
Mengwei Xu
Jiawei Liu
Yuanqiang Liu
F. Lin
Yunxin Liu
Xuanzhe Liu
HAI
91
183
0
08 Nov 2018
CNN inference acceleration using dictionary of centroids
D. Babin
I. Mazurenko
D. Parkhomenko
A. Voloshko
MQ
24
0
0
19 Oct 2018
Efficient architecture for deep neural networks with heterogeneous sensitivity
Hyunjoong Cho
Jinhyeok Jang
Chanhyeok Lee
Seungjoon Yang
37
0
0
12 Oct 2018
Rethinking the Value of Network Pruning
Zhuang Liu
Mingjie Sun
Tinghui Zhou
Gao Huang
Trevor Darrell
44
1,480
0
11 Oct 2018
Deep Asymmetric Networks with a Set of Node-wise Variant Activation Functions
Jinhyeok Jang
Hyunjoong Cho
Jaehong Kim
Jaeyeon Lee
Seungjoon Yang
20
2
0
11 Sep 2018
Learning Sparse Low-Precision Neural Networks With Learnable Regularization
Yoojin Choi
Mostafa El-Khamy
Jungwon Lee
MQ
66
31
0
01 Sep 2018
Predefined Sparseness in Recurrent Sequence Models
T. Demeester
Johannes Deleu
Fréderic Godin
Chris Develder
25
3
0
27 Aug 2018
Spectral Pruning: Compressing Deep Neural Networks via Spectral Analysis and its Generalization Error
Taiji Suzuki
Hiroshi Abe
Tomoya Murata
Shingo Horiuchi
Kotaro Ito
Tokuma Wachi
So Hirai
Masatoshi Yukishima
Tomoaki Nishimura
MLT
62
10
0
26 Aug 2018
Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks
Yang He
Xuanyi Dong
Guoliang Kang
Yanwei Fu
C. Yan
Yi Yang
118
135
0
22 Aug 2018
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
Yang He
Guoliang Kang
Xuanyi Dong
Yanwei Fu
Yi Yang
AAML VLM
98
966
0
21 Aug 2018
LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks
Dongqing Zhang
Jiaolong Yang
Dongqiangzi Ye
G. Hua
MQ
74
703
0
26 Jul 2018
Impostor Networks for Fast Fine-Grained Recognition
V. Lebedev
Artem Babenko
Victor Lempitsky
26
3
0
13 Jun 2018
EasyConvPooling: Random Pooling with Easy Convolution for Accelerating Training and Testing
Jianzhong Sheng
Chuanbo Chen
Chenchen Fu
Chun Jason Xue
82
5
0
05 Jun 2018
Targeted Kernel Networks: Faster Convolutions with Attentive Regularization
Kashyap Chitta
24
2
0
01 Jun 2018
AutoPruner: An End-to-End Trainable Filter Pruning Method for Efficient Deep Model Inference
Jian-Hao Luo
Jianxin Wu
75
210
0
23 May 2018
Compression of Deep Convolutional Neural Networks under Joint Sparsity Constraints
Yoojin Choi
Mostafa El-Khamy
Jungwon Lee
40
6
0
21 May 2018
Recurrent knowledge distillation
S. Pintea
Yue Liu
Jan van Gemert
ODL
21
2
0
18 May 2018
Low-memory convolutional neural networks through incremental depth-first processing
Jonathan Binas
Yoshua Bengio
SupR
39
3
0
28 Apr 2018
Accelerator-Aware Pruning for Convolutional Neural Networks
Hyeong-Ju Kang
93
90
0
26 Apr 2018
Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds
Cenk Baykal
Lucas Liebenwein
Igor Gilitschenski
Dan Feldman
Daniela Rus
95
79
0
15 Apr 2018
Efficient Hardware Realization of Convolutional Neural Networks using Intra-Kernel Regular Pruning
Maurice Yang
Mahmoud Faraj
Assem Hussein
V. Gaudet
CVBM
64
12
0
15 Mar 2018
Exploring Linear Relationship in Feature Map Subspace for ConvNets Compression
Dong Wang
Lei Zhou
Xueni Zhang
Xiao Bai
Jun Zhou
75
47
0
15 Mar 2018
Paraphrasing Complex Network: Network Compression via Factor Transfer
Jangho Kim
Seonguk Park
Nojun Kwak
95
551
0
14 Feb 2018
From Hashing to CNNs: Training BinaryWeight Networks via Hashing
Qinghao Hu
Peisong Wang
Jian Cheng
MQ
88
98
0
08 Feb 2018
Universal Deep Neural Network Compression
Yoojin Choi
Mostafa El-Khamy
Jungwon Lee
MQ
151
88
0
07 Feb 2018
Recent Advances in Efficient Computation of Deep Convolutional Neural Networks
Jian Cheng
Peisong Wang
Gang Li
Qinghao Hu
Hanqing Lu
49
3
0
03 Feb 2018
Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers
Jianbo Ye
Xin Lu
Zhe Lin
Jianmin Wang
102
408
0
01 Feb 2018
Deep Net Triage: Analyzing the Importance of Network Layers via Structural Compression
Theodore S. Nowak
Jason J. Corso
FAtt
40
3
0
15 Jan 2018
StrassenNets: Deep Learning with a Multiplication Budget
Michael Tschannen
Aran Khanna
Anima Anandkumar
52
30
0
11 Dec 2017
WSNet: Compact and Efficient Networks Through Weight Sampling
Xiaojie Jin
Yingzhen Yang
N. Xu
Jianchao Yang
Nebojsa Jojic
Jiashi Feng
Shuicheng Yan
49
2
0
28 Nov 2017
Deep Expander Networks: Efficient Deep Networks from Graph Theory
Ameya Prabhu
G. Varma
A. Namboodiri
GNN
130
72
0
23 Nov 2017
MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks
A. Gordon
Elad Eban
Ofir Nachum
Bo Chen
Hao Wu
Tien-Ju Yang
Edward Choi
87
339
0
18 Nov 2017
Towards Effective Low-bitwidth Convolutional Neural Networks
Bohan Zhuang
Chunhua Shen
Mingkui Tan
Lingqiao Liu
Ian Reid
MQ
98
233
0
01 Nov 2017
A Survey of Model Compression and Acceleration for Deep Neural Networks
Yu Cheng
Duo Wang
Pan Zhou
Zhang Tao
150
1,101
0
23 Oct 2017
To prune, or not to prune: exploring the efficacy of pruning for model compression
Michael Zhu
Suyog Gupta
204
1,285
0
05 Oct 2017
Learning Intrinsic Sparse Structures within Long Short-Term Memory
W. Wen
Yuxiong He
Samyam Rajbhandari
Minjia Zhang
Wenhan Wang
Fang Liu
Bin Hu
Yiran Chen
H. Li
MQ
135
142
0
15 Sep 2017
The Role of Minimal Complexity Functions in Unsupervised Learning of Semantic Mappings
Tomer Galanti
Lior Wolf
Sagie Benaim
91
25
0
31 Aug 2017
Fine-Pruning: Joint Fine-Tuning and Compression of a Convolutional Network with Bayesian Optimization
Frederick Tung
S. Muralidharan
Greg Mori
72
36
0
28 Jul 2017
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
Jian-Hao Luo
Jianxin Wu
Weiyao Lin
63
1,763
0
20 Jul 2017
Channel Pruning for Accelerating Very Deep Neural Networks
Yihui He
Xiangyu Zhang
Jian Sun
226
2,538
0
19 Jul 2017
Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science
Decebal Constantin Mocanu
Elena Mocanu
Peter Stone
Phuong H. Nguyen
M. Gibescu
A. Liotta
192
641
0
15 Jul 2017
Exploring the Regularity of Sparse Structure in Convolutional Neural Networks
Huizi Mao
Song Han
Jeff Pool
Wenshuo Li
Xingyu Liu
Yu Wang
W. Dally
133
244
0
24 May 2017
Structured Bayesian Pruning via Log-Normal Multiplicative Noise
Kirill Neklyudov
Dmitry Molchanov
Arsenii Ashukha
Dmitry Vetrov
BDL
149
189
0
20 May 2017
Coordinating Filters for Faster Deep Neural Networks
W. Wen
Cong Xu
Chunpeng Wu
Yandan Wang
Yiran Chen
Hai Helen Li
64
138
0
28 Mar 2017
More is Less: A More Complicated Network with Less Inference Complexity
Xuanyi Dong
Junshi Huang
Yi Yang
Shuicheng Yan
85
288
0
25 Mar 2017
Enabling Sparse Winograd Convolution by Native Pruning
Sheng Li
Jongsoo Park
P. T. P. Tang
63
51
0
28 Feb 2017