Pruning Filters for Efficient ConvNets

31 August 2016
Hao Li
Asim Kadav
Igor Durdanovic
H. Samet
H. Graf
3DPC

Papers citing "Pruning Filters for Efficient ConvNets"

Showing 50 of 1,579 citing papers
Adaptive Dense-to-Sparse Paradigm for Pruning Online Recommendation System with Non-Stationary Data
Mao Ye
Dhruv Choudhary
Jiecao Yu
Ellie Wen
Zeliang Chen
Jiyan Yang
Jongsoo Park
Qiang Liu
A. Kejariwal
29
9
0
16 Oct 2020
Towards Optimal Filter Pruning with Balanced Performance and Pruning Speed
Dong Li
Sitong Chen
Xudong Liu
Yunda Sun
Li Lyna Zhang
VLM
26
4
0
14 Oct 2020
Coarse and fine-grained automatic cropping deep convolutional neural network
Jingfei Chang
27
0
0
13 Oct 2020
Glance and Focus: a Dynamic Approach to Reducing Spatial Redundancy in Image Classification
Yulin Wang
Kangchen Lv
Rui Huang
Shiji Song
Le Yang
Gao Huang
3DH
16
148
0
11 Oct 2020
Accelerate CNNs from Three Dimensions: A Comprehensive Pruning Framework
Wenxiao Wang
Minghao Chen
Shuai Zhao
Long Chen
Jinming Hu
Haifeng Liu
Deng Cai
Xiaofei He
Wei Liu
35
58
0
10 Oct 2020
Training Binary Neural Networks through Learning with Noisy Supervision
Kai Han
Yunhe Wang
Yixing Xu
Chunjing Xu
Enhua Wu
Chang Xu
MQ
15
55
0
10 Oct 2020
Be Your Own Best Competitor! Multi-Branched Adversarial Knowledge Transfer
Mahdi Ghorbani
Fahimeh Fooladgar
S. Kasaei
AAML
39
0
0
09 Oct 2020
Comprehensive Online Network Pruning via Learnable Scaling Factors
Muhammad Umair Haider
M. Taj
29
7
0
06 Oct 2020
A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
Sanghyun Hong
Yigitcan Kaya
Ionut-Vlad Modoranu
Tudor Dumitras
AAML
17
69
0
06 Oct 2020
Winning Lottery Tickets in Deep Generative Models
Neha Kalibhat
Yogesh Balaji
S. Feizi
WIGM
31
42
0
05 Oct 2020
Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks
Róbert Csordás
Sjoerd van Steenkiste
Jürgen Schmidhuber
53
88
0
05 Oct 2020
Joint Pruning & Quantization for Extremely Sparse Neural Networks
Po-Hsiang Yu
Sih-Sian Wu
Jan P. Klopp
Liang-Gee Chen
Shao-Yi Chien
MQ
35
14
0
05 Oct 2020
UCP: Uniform Channel Pruning for Deep Convolutional Neural Networks Compression and Acceleration
Jingfei Chang
Yang Lu
Ping Xue
Xing Wei
Zhen Wei
22
2
0
03 Oct 2020
Improving Network Slimming with Nonconvex Regularization
Kevin Bui
Fredrick Park
Shuai Zhang
Y. Qi
Jack Xin
21
9
0
03 Oct 2020
Pruning Filter in Filter
Fanxu Meng
Hao Cheng
Ke Li
Huixiang Luo
Xiao-Wei Guo
Guangming Lu
Xing Sun
VLM
32
104
0
30 Sep 2020
Self-grouping Convolutional Neural Networks
Qingbei Guo
Xiaojun Wu
J. Kittler
Zhiquan Feng
25
22
0
29 Sep 2020
Grow-Push-Prune: aligning deep discriminants for effective structural network compression
Qing Tian
Tal Arbel
James J. Clark
22
8
0
29 Sep 2020
A Gradient Flow Framework For Analyzing Network Pruning
Ekdeep Singh Lubana
Robert P. Dick
34
52
0
24 Sep 2020
Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training
Dingqing Yang
Amin Ghasemazar
X. Ren
Maximilian Golub
G. Lemieux
Mieszko Lis
22
48
0
23 Sep 2020
Pruning Convolutional Filters using Batch Bridgeout
Najeeb Khan
Ian Stavness
28
3
0
23 Sep 2020
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot
Jingtong Su
Yihang Chen
Tianle Cai
Tianhao Wu
Ruiqi Gao
Liwei Wang
Jason D. Lee
21
85
0
22 Sep 2020
PP-OCR: A Practical Ultra Lightweight OCR System
Yuning Du
Chenxia Li
Ruoyu Guo
Xiaoting Yin
Weiwei Liu
...
Yifan Bai
Zilin Yu
Yehua Yang
Qingqing Dang
Hongya Wang
36
178
0
21 Sep 2020
Pruning Neural Networks at Initialization: Why are We Missing the Mark?
Jonathan Frankle
Gintare Karolina Dziugaite
Daniel M. Roy
Michael Carbin
30
238
0
18 Sep 2020
MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks
Zhiqiang Shen
Marios Savvides
33
63
0
17 Sep 2020
Holistic Filter Pruning for Efficient Deep Neural Networks
Lukas Enderich
Fabian Timm
Wolfram Burgard
32
7
0
17 Sep 2020
A Progressive Sub-Network Searching Framework for Dynamic Inference
Li Yang
Zhezhi He
Yu Cao
Deliang Fan
AI4CE
20
6
0
11 Sep 2020
Achieving Adversarial Robustness via Sparsity
Shu-Fan Wang
Ningyi Liao
Liyao Xiang
Nanyang Ye
Quanshi Zhang
AAML
22
15
0
11 Sep 2020
An Efficient Quantitative Approach for Optimizing Convolutional Neural Networks
Yuke Wang
Boyuan Feng
Xueqiao Peng
Yufei Ding
3DV
24
1
0
11 Sep 2020
Extending Label Smoothing Regularization with Self-Knowledge Distillation
Jiyue Wang
Pei Zhang
Wenjie Pang
Jie Li
14
0
0
11 Sep 2020
Understanding the Role of Individual Units in a Deep Neural Network
David Bau
Jun-Yan Zhu
Hendrik Strobelt
Àgata Lapedriza
Bolei Zhou
Antonio Torralba
GAN
25
437
0
10 Sep 2020
OrthoReg: Robust Network Pruning Using Orthonormality Regularization
Ekdeep Singh Lubana
Puja Trivedi
C. Hougen
Robert P. Dick
Alfred Hero
37
1
0
10 Sep 2020
Prune Responsibly
Michela Paganini
VLM
27
21
0
10 Sep 2020
On the Orthogonality of Knowledge Distillation with Other Techniques: From an Ensemble Perspective
Seonguk Park
Kiyoon Yoo
Nojun Kwak
FedML
26
3
0
09 Sep 2020
CNNPruner: Pruning Convolutional Neural Networks with Visual Analytics
Guan Li
Junpeng Wang
Han-Wei Shen
Kaixin Chen
Guihua Shan
Zhonghua Lu
AAML
31
47
0
08 Sep 2020
FlipOut: Uncovering Redundant Weights via Sign Flipping
A. Apostol
M. Stol
Patrick Forré
UQCV
12
1
0
05 Sep 2020
ACDC: Weight Sharing in Atom-Coefficient Decomposed Convolution
Ze Wang
Xiuyuan Cheng
Guillermo Sapiro
Qiang Qiu
14
9
0
04 Sep 2020
S3NAS: Fast NPU-aware Neural Architecture Search Methodology
Jaeseong Lee
Duseok Kang
S. Ha
35
10
0
04 Sep 2020
It's Hard for Neural Networks To Learn the Game of Life
Jacob Mitchell Springer
Garrett Kenyon
27
21
0
03 Sep 2020
Transform Quantization for CNN (Convolutional Neural Network) Compression
Sean I. Young
Wang Zhe
David S. Taubman
B. Girod
MQ
36
69
0
02 Sep 2020
Efficient and Sparse Neural Networks by Pruning Weights in a Multiobjective Learning Approach
Malena Reiners
K. Klamroth
Michael Stiglmayr
22
17
0
31 Aug 2020
HALO: Learning to Prune Neural Networks with Shrinkage
Skyler Seto
M. Wells
Wenyu Zhang
24
0
0
24 Aug 2020
One Weight Bitwidth to Rule Them All
Ting-Wu Chin
P. Chuang
Vikas Chandra
Diana Marculescu
MQ
28
25
0
22 Aug 2020
Training Sparse Neural Networks using Compressed Sensing
Jonathan W. Siegel
Jianhong Chen
Pengchuan Zhang
Jinchao Xu
31
5
0
21 Aug 2020
Utilizing Explainable AI for Quantization and Pruning of Deep Neural Networks
Muhammad Sabih
Frank Hannig
J. Teich
MQ
14
22
0
20 Aug 2020
Data-Independent Structured Pruning of Neural Networks via Coresets
Ben Mussay
Dan Feldman
Samson Zhou
Vladimir Braverman
Margarita Osadchy
26
25
0
19 Aug 2020
Cascaded channel pruning using hierarchical self-distillation
Roy Miles
K. Mikolajczyk
24
7
0
16 Aug 2020
AntiDote: Attention-based Dynamic Optimization for Neural Network Runtime Efficiency
Fuxun Yu
Chenchen Liu
Di Wang
Yanzhi Wang
Xiang Chen
16
7
0
14 Aug 2020
FATNN: Fast and Accurate Ternary Neural Networks
Peng Chen
Bohan Zhuang
Chunhua Shen
MQ
6
15
0
12 Aug 2020
RARTS: An Efficient First-Order Relaxed Architecture Search Method
Fanghui Xue
Y. Qi
Jack Xin
27
1
0
10 Aug 2020
Structured Convolutions for Efficient Neural Network Design
Yash Bhalgat
Yizhe Zhang
J. Lin
Fatih Porikli
16
8
0
06 Aug 2020