Pruning Filters for Efficient ConvNets
31 August 2016
Hao Li
Asim Kadav
Igor Durdanovic
H. Samet
H. Graf
    3DPC
arXiv: 1608.08710

Papers citing "Pruning Filters for Efficient ConvNets"

Showing 50 of 1,596 citing papers
ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers
Z. Yao
Reza Yazdani Aminabadi
Minjia Zhang
Xiaoxia Wu
Conglong Li
Yuxiong He
VLM, MQ
174
484
0
04 Jun 2022
Dynamic Kernel Selection for Improved Generalization and Memory Efficiency in Meta-learning
Arnav Chavan
Rishabh Tiwari
Udbhav Bamba
D. K. Gupta
81
5
0
03 Jun 2022
DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks
Y. Fu
Haichuan Yang
Jiayi Yuan
Meng Li
Cheng Wan
Raghuraman Krishnamoorthi
Vikas Chandra
Yingyan Lin
130
19
0
02 Jun 2022
ORC: Network Group-based Knowledge Distillation using Online Role Change
Jun-woo Choi
Hyeon Cho
Seockhwa Jeong
Wonjun Hwang
29
3
0
01 Jun 2022
Gator: Customizable Channel Pruning of Neural Networks with Gating
E. Passov
E. David
N. Netanyahu
AAML
51
0
0
30 May 2022
MiniDisc: Minimal Distillation Schedule for Language Model Compression
Chen Zhang
Yang Yang
Qifan Wang
Jiahao Liu
Jingang Wang
Wei Wu
Dawei Song
79
4
0
29 May 2022
FCN-Pose: A Pruned and Quantized CNN for Robot Pose Estimation for Constrained Devices
M. Dantas
I. R. R. Silva
A. T. O. Filho
Gibson B. N. Barbosa
Daniel Bezerra
D. Sadok
J. Kelner
M. Marquezini
Ricardo F. D. Silva
49
1
0
26 May 2022
Compression-aware Training of Neural Networks using Frank-Wolfe
Max Zimmer
Christoph Spiegel
Sebastian Pokutta
95
11
0
24 May 2022
The Importance of Being Parameters: An Intra-Distillation Method for Serious Gains
Haoran Xu
Philipp Koehn
Kenton W. Murray
MoMe
40
5
0
23 May 2022
Energy-efficient Deployment of Deep Learning Applications on Cortex-M based Microcontrollers using Deep Compression
M. Deutel
Philipp Woller
Christopher Mutschler
Jürgen Teich
113
4
0
20 May 2022
Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey
Paul Wimmer
Jens Mehnert
Alexandru Paul Condurache
DD
98
21
0
17 May 2022
Residual Local Feature Network for Efficient Super-Resolution
Fang Kong
Mingxi Li
Songwei Liu
Ding Liu
Jingwen He
Yang Bai
Fangmin Chen
Lean Fu
SupR
75
159
0
16 May 2022
A Comprehensive Survey on Model Quantization for Deep Neural Networks in Image Classification
Babak Rokh
A. Azarpeyvand
Alireza Khanteymoori
MQ
127
103
0
14 May 2022
Target Aware Network Architecture Search and Compression for Efficient Knowledge Transfer
S. H. Shabbeer Basha
Debapriya Tula
Sravan Kumar Vinakota
S. Dubey
49
3
0
12 May 2022
Revisiting Random Channel Pruning for Neural Network Compression
Yawei Li
Kamil Adamczewski
Wen Li
Shuhang Gu
Radu Timofte
Luc Van Gool
110
86
0
11 May 2022
Robust Learning of Parsimonious Deep Neural Networks
Valentin Frank Ingmar Guenter
Athanasios Sideris
64
2
0
10 May 2022
Task-specific Compression for Multi-task Language Models using Attribution-based Pruning
Nakyeong Yang
Yunah Jang
Hwanhee Lee
Seohyeong Jung
Kyomin Jung
28
8
0
09 May 2022
A Survey on AI Sustainability: Emerging Trends on Learning Algorithms and Research Challenges
Zhenghua Chen
Min-man Wu
Alvin Chan
Xiaoli Li
Yew-Soon Ong
51
7
0
08 May 2022
Convolutional and Residual Networks Provably Contain Lottery Tickets
R. Burkholz
UQCV, MLT
77
13
0
04 May 2022
Most Activation Functions Can Win the Lottery Without Excessive Depth
R. Burkholz
MLT
115
18
0
04 May 2022
Domino Saliency Metrics: Improving Existing Channel Saliency Metrics with Structural Information
Kaveena Persand
Andrew Anderson
David Gregg
27
0
0
04 May 2022
Compact Neural Networks via Stacking Designed Basic Units
Weichao Lan
Y. Cheung
Juyong Jiang
58
0
0
03 May 2022
Triangular Dropout: Variable Network Width without Retraining
Edward W. Staley
Jared Markowitz
55
2
0
02 May 2022
Cracking White-box DNN Watermarks via Invariant Neuron Transforms
Yifan Yan
Xudong Pan
Yining Wang
Mi Zhang
Min Yang
AAML
46
14
0
30 Apr 2022
Federated Progressive Sparsification (Purge, Merge, Tune)+
Dimitris Stripelis
Umang Gupta
Greg Ver Steeg
J. Ambite
FedML
62
11
0
26 Apr 2022
Attentive Fine-Grained Structured Sparsity for Image Restoration
Junghun Oh
Heewon Kim
Seungjun Nah
Chee Hong
Jonghyun Choi
Kyoung Mu Lee
132
20
0
26 Apr 2022
Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications
Han Cai
Ji Lin
Chengyue Wu
Zhijian Liu
Haotian Tang
Hanrui Wang
Ligeng Zhu
Song Han
116
115
0
25 Apr 2022
Boosting Pruned Networks with Linear Over-parameterization
Yundi Qian
Siyuan Pan
Xiaoshuang Li
Jie Zhang
Liang Hou
Xiaobing Tu
41
2
0
25 Apr 2022
Dynamic Network Adaptation at Inference
Daniel Mendoza
Caroline Trippel
57
0
0
18 Apr 2022
End-to-End Sensitivity-Based Filter Pruning
Z. Babaiee
Lucas Liebenwein
Ramin Hasani
Daniela Rus
Radu Grosu
AAML
59
1
0
15 Apr 2022
Q-TART: Quickly Training for Adversarial Robustness and in-Transferability
Madan Ravi Ganesh
Salimeh Yasaei Sekeh
Jason J. Corso
AAML
33
1
0
14 Apr 2022
HASA: Hybrid Architecture Search with Aggregation Strategy for Echinococcosis Classification and Ovary Segmentation in Ultrasound Images
Jikuan Qian
Rui Li
Xin Yang
Yuhao Huang
Mingyuan Luo
...
Wenhui Hong
Ruobing Huang
Dong Ni
51
10
0
14 Apr 2022
Receding Neuron Importances for Structured Pruning
Mihai Suteu
Yike Guo
47
1
0
13 Apr 2022
HuBERT-EE: Early Exiting HuBERT for Efficient Speech Recognition
J. Yoon
Beom Jun Woo
N. Kim
66
13
0
13 Apr 2022
OMAD: On-device Mental Anomaly Detection for Substance and Non-Substance Users
Emon Dey
Nirmalya Roy
25
6
0
13 Apr 2022
Neural Network Pruning by Cooperative Coevolution
Haopu Shang
Jia-Liang Wu
Wenjing Hong
Chaojun Qian
VLM
63
23
0
12 Apr 2022
Compact Model Training by Low-Rank Projection with Energy Transfer
K. Guo
Zhenquan Lin
Xiaofen Xing
Fang Liu
Xiangmin Xu
73
2
0
12 Apr 2022
E^2TAD: An Energy-Efficient Tracking-based Action Detector
Xin Hu
Zhenyu Wu
Haoyuan Miao
Siqi Fan
Taiyu Long
...
Pengcheng Pi
Yi Wu
Zhou Ren
Zhangyang Wang
G. Hua
85
2
0
09 Apr 2022
Deep neural network goes lighter: A case study of deep compression techniques on automatic RF modulation recognition for Beyond 5G networks
Anu Jagannath
Jithin Jagannath
Yanzhi Wang
Tommaso Melodia
68
3
0
09 Apr 2022
LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification
Sharath Girish
Kamal Gupta
Saurabh Singh
Abhinav Shrivastava
98
11
0
06 Apr 2022
SD-Conv: Towards the Parameter-Efficiency of Dynamic Convolution
Shwai He
Chenbo Jiang
Daize Dong
Liang Ding
72
5
0
05 Apr 2022
Supervised Robustness-preserving Data-free Neural Network Pruning
Mark Huasong Meng
Guangdong Bai
Sin Gee Teo
Jin Song Dong
AAML
96
4
0
02 Apr 2022
Monarch: Expressive Structured Matrices for Efficient and Accurate Training
Tri Dao
Beidi Chen
N. Sohoni
Arjun D Desai
Michael Poli
Jessica Grogan
Alexander Liu
Aniruddh Rao
Atri Rudra
Christopher Ré
141
97
0
01 Apr 2022
CHEX: CHannel EXploration for CNN Model Compression
Zejiang Hou
Minghai Qin
Fei Sun
Xiaolong Ma
Kun Yuan
Yi Xu
Yen-kuang Chen
Rong Jin
Yuan Xie
S. Kung
82
74
0
29 Mar 2022
A Passive Similarity based CNN Filter Pruning for Efficient Acoustic Scene Classification
Arshdeep Singh
Mark D. Plumbley
3DPC
58
14
0
29 Mar 2022
CNN Filter DB: An Empirical Investigation of Trained Convolutional Filters
Paul Gavrikov
J. Keuper
AAML
105
31
0
29 Mar 2022
Enhancing Transformer Efficiency for Multivariate Time Series Classification
Yuqing Wang
Yun Zhao
Linda R. Petzold
AI4TS
55
2
0
28 Mar 2022
Searching for Network Width with Bilaterally Coupled Network
Xiu Su
Shan You
Jiyang Xie
Fei Wang
Chao Qian
Changshui Zhang
Chang Xu
75
7
0
25 Mar 2022
Lightweight Graph Convolutional Networks with Topologically Consistent Magnitude Pruning
H. Sahbi
GNN
49
1
0
25 Mar 2022
Deformable Butterfly: A Highly Structured and Sparse Linear Transform
R. Lin
Jie Ran
King Hung Chiu
Graziano Chesi
Ngai Wong
46
15
0
25 Mar 2022