Pruning Filters for Efficient ConvNets (arXiv:1608.08710)
v1 · v2 · v3 (latest)

31 August 2016
Hao Li
Asim Kadav
Igor Durdanovic
H. Samet
H. Graf
    3DPC
ArXiv (abs) · PDF · HTML

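For context, the cited paper's core idea is to rank the filters of each convolutional layer by the sum of their absolute kernel weights (their L1 norms), remove the lowest-ranked filters together with their feature maps, and fine-tune the slimmed network. The snippet below is a minimal NumPy sketch of that ranking-and-pruning step, not the authors' implementation; the function names, the 50% pruning ratio, and the random weights are illustrative only.

```python
import numpy as np

def rank_filters_by_l1(conv_weight: np.ndarray) -> np.ndarray:
    """Return filter indices sorted from smallest to largest L1 norm.

    conv_weight is assumed to have shape (out_channels, in_channels, kH, kW),
    i.e. one 3-D filter per output channel.
    """
    # Sum of absolute kernel weights per output filter (its L1 norm).
    l1_norms = np.abs(conv_weight).sum(axis=(1, 2, 3))
    return np.argsort(l1_norms)

def prune_filters(conv_weight: np.ndarray, prune_ratio: float) -> np.ndarray:
    """Drop the prune_ratio fraction of filters with the smallest L1 norm."""
    order = rank_filters_by_l1(conv_weight)
    n_prune = int(prune_ratio * conv_weight.shape[0])
    keep = np.sort(order[n_prune:])   # indices of surviving filters, original order
    return conv_weight[keep]          # pruned weight tensor

# Illustrative example: a random 64-filter 3x3 conv layer pruned by 50%.
w = np.random.randn(64, 32, 3, 3)
w_pruned = prune_filters(w, prune_ratio=0.5)
print(w_pruned.shape)  # (32, 32, 3, 3)
```

In the paper's pipeline the corresponding input channels of the next layer are removed as well, and the network is fine-tuned to recover accuracy; the sketch above covers only the per-layer filter selection.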
Papers citing "Pruning Filters for Efficient ConvNets"

50 / 1,596 papers shown
Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
Pu Zhao
Pin-Yu Chen
Payel Das
Karthikeyan N. Ramamurthy
Xue Lin
AAML
152
191
0
30 Apr 2020
Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima
Enzo Tartaglione
Andrea Bragagnolo
Marco Grangetto
66
12
0
30 Apr 2020
TRP: Trained Rank Pruning for Efficient Deep Neural Networks
Yuhui Xu
Yuxi Li
Shuai Zhang
W. Wen
Botao Wang
Y. Qi
Yiran Chen
Weiyao Lin
H. Xiong
AAML
78
71
0
30 Apr 2020
Rethinking Class-Discrimination Based CNN Channel Pruning
Yuchen Liu
D. Wentzlaff
S. Kung
51
10
0
29 Apr 2020
WoodFisher: Efficient Second-Order Approximation for Neural Network Compression
Sidak Pal Singh
Dan Alistarh
70
28
0
29 Apr 2020
Do We Need Fully Connected Output Layers in Convolutional Networks?
Zhongchao Qian
Tyler L. Hayes
Kushal Kafle
Christopher Kanan
49
9
0
28 Apr 2020
FlexSA: Flexible Systolic Array Architecture for Efficient Pruned DNN Model Training
Sangkug Lym
M. Erez
36
26
0
27 Apr 2020
Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation
Hao Cheng
Fanxu Meng
Ke Li
Huixiang Luo
Guangming Lu
Xing Sun
Feiyue Huang
20
0
0
26 Apr 2020
Convolution-Weight-Distribution Assumption: Rethinking the Criteria of Channel Pruning
Zhongzhan Huang
Wenqi Shao
Xinjiang Wang
Liang Lin
Ping Luo
75
56
0
24 Apr 2020
SIPA: A Simple Framework for Efficient Networks
Gihun Lee
Sangmin Bae
Jaehoon Oh
Seyoung Yun
19
1
0
24 Apr 2020
Automatic low-bit hybrid quantization of neural networks through meta learning
Tao Wang
Junsong Wang
Chang Xu
Chao Xue
MQ
23
2
0
24 Apr 2020
Intermittent Inference with Nonuniformly Compressed Multi-Exit Neural Network for Energy Harvesting Powered Devices
Yawen Wu
Zhepeng Wang
Zhenge Jia
Yiyu Shi
Jiaxi Hu
86
54
0
23 Apr 2020
MGX: Near-Zero Overhead Memory Protection for Data-Intensive Accelerators
Weizhe Hua
M. Umar
Zhiru Zhang
G. E. Suh
GNN
100
21
0
20 Apr 2020
Efficient Synthesis of Compact Deep Neural Networks
Wenhan Xia
Hongxu Yin
N. Jha
57
3
0
18 Apr 2020
Non-Blocking Simultaneous Multithreading: Embracing the Resiliency of Deep Neural Networks
Gil Shomron
U. Weiser
47
15
0
17 Apr 2020
Training with Quantization Noise for Extreme Model Compression
Angela Fan
Pierre Stock
Benjamin Graham
Edouard Grave
Remi Gribonval
Hervé Jégou
Armand Joulin
MQ
111
246
0
15 Apr 2020
A Unified DNN Weight Compression Framework Using Reweighted Optimization Methods
Tianyun Zhang
Xiaolong Ma
Zheng Zhan
Shangli Zhou
Minghai Qin
Fei Sun
Yen-kuang Chen
Caiwen Ding
M. Fardad
Yanzhi Wang
40
5
0
12 Apr 2020
GeneCAI: Genetic Evolution for Acquiring Compact AI
Mojan Javaheripi
Mohammad Samragh
T. Javidi
F. Koushanfar
74
9
0
08 Apr 2020
LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression
Yihuan Mao
Yujing Wang
Chufan Wu
Chen Zhang
Yang-Feng Wang
Yaming Yang
Quanlu Zhang
Yunhai Tong
Jing Bai
60
74
0
08 Apr 2020
Teacher-Class Network: A Neural Network Compression Mechanism
Shaiq Munir Malik
Muhammad Umair Haider
Fnu Mohbat
Musab Rasheed
M. Taj
85
5
0
07 Apr 2020
How Do You Act? An Empirical Study to Understand Behavior of Deep Reinforcement Learning Agents
Richard Meyes
Moritz Schneider
Tobias Meisen
55
2
0
07 Apr 2020
Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio
Zhengsu Chen
J. Niu
Lingxi Xie
Xuefeng Liu
Longhui Wei
Qi Tian
54
12
0
06 Apr 2020
A Learning Framework for n-bit Quantized Neural Networks toward FPGAs
Jun Chen
Lu Liu
Yong Liu
Xianfang Zeng
MQ
91
28
0
06 Apr 2020
DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation
Xuefei Ning
Tianchen Zhao
Wenshuo Li
Peng Lei
Yu Wang
Huazhong Yang
90
104
0
05 Apr 2020
TimeGate: Conditional Gating of Segments in Long-range Activities
Noureldien Hussein
Mihir Jain
B. Bejnordi
AI4TS
108
16
0
03 Apr 2020
Composition of Saliency Metrics for Channel Pruning with a Myopic Oracle
Kaveena Persand
Andrew Anderson
David Gregg
26
2
0
03 Apr 2020
Under the Hood of Neural Networks: Characterizing Learned Representations by Functional Neuron Populations and Network Ablations
Richard Meyes
Constantin Waubert de Puiseau
Andres Felipe Posada-Moreno
Tobias Meisen
AI4CE
78
22
0
02 Apr 2020
Continual Learning with Node-Importance based Adaptive Group Sparse Regularization
Sangwon Jung
Hongjoon Ahn
Sungmin Cha
Taesup Moon
CLL
88
128
0
30 Mar 2020
How Not to Give a FLOP: Combining Regularization and Pruning for Efficient Inference
Tai Vu
Emily Wen
Roy Nehoran
19
5
0
30 Mar 2020
Rethinking Depthwise Separable Convolutions: How Intra-Kernel Correlations Lead to Improved MobileNets
D. Haase
Manuel Amthor
64
136
0
30 Mar 2020
Nonconvex sparse regularization for deep neural networks and its optimality
Ilsang Ohn
Yongdai Kim
71
19
0
26 Mar 2020
SPFCN: Select and Prune the Fully Convolutional Networks for Real-time Parking Slot Detection
Zhuoping Yu
Zhong Gao
Hansheng Chen
Yuyao Huang
58
16
0
25 Mar 2020
A Survey of Methods for Low-Power Deep Learning and Computer Vision
Abhinav Goel
Caleb Tung
Yung-Hsiang Lu
George K. Thiruvathukal
VLM
53
95
0
24 Mar 2020
Dynamic Narrowing of VAE Bottlenecks Using GECO and L0 Regularization
Cedric De Boom
Samuel T. Wauthier
Tim Verbelen
Bart Dhoedt
DRL
35
6
0
24 Mar 2020
Steepest Descent Neural Architecture Optimization: Escaping Local Optimum with Signed Neural Splitting
Lemeng Wu
Mao Ye
Qi Lei
Jason D. Lee
Qiang Liu
88
15
0
23 Mar 2020
Efficient Crowd Counting via Structured Knowledge Transfer
Lingbo Liu
Jiaqi Chen
Hefeng Wu
Tianshui Chen
Guanbin Li
Liang Lin
98
65
0
23 Mar 2020
Review of data analysis in vision inspection of power lines with an in-depth discussion of deep learning technology
Xinyu Liu
Xiren Miao
Hao Jiang
Jia Chen
55
12
0
22 Mar 2020
Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression
Yawei Li
Shuhang Gu
Christoph Mayer
Luc Van Gool
Radu Timofte
225
192
0
19 Mar 2020
MINT: Deep Network Compression via Mutual Information-based Neuron Trimming
Madan Ravi Ganesh
Jason J. Corso
Salimeh Yasaei Sekeh
MQ
104
16
0
18 Mar 2020
Collaborative Distillation for Ultra-Resolution Universal Style Transfer
Huan Wang
Yijun Li
Yuehai Wang
Haoji Hu
Ming-Hsuan Yang
172
103
0
18 Mar 2020
SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Weights Flipping
Jiaxiong Qiu
Cai Chen
Shuaicheng Liu
B. Zeng
113
41
0
16 Mar 2020
Resolution Adaptive Networks for Efficient Inference
Le Yang
Yizeng Han
Xi Chen
Shiji Song
Jifeng Dai
Gao Huang
106
219
0
16 Mar 2020
Channel Pruning Guided by Classification Loss and Feature Importance
Jinyang Guo
Wanli Ouyang
Dong Xu
64
54
0
15 Mar 2020
CoCoPIE: Making Mobile AI Sweet As PIE --Compression-Compilation Co-Design Goes a Long Way
Shaoshan Liu
Bin Ren
Xipeng Shen
Yanzhi Wang
69
18
0
14 Mar 2020
SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration
Jun Shi
Jianfeng Xu
K. Tasaka
Zhibo Chen
78
25
0
12 Mar 2020
Highly Efficient Salient Object Detection with 100K Parameters
Shanghua Gao
Yong-qiang Tan
Ming-Ming Cheng
Chengze Lu
Yunpeng Chen
Shuicheng Yan
306
172
0
12 Mar 2020
Channel Pruning via Optimal Thresholding
Yun Ye
Ganmei You
Jong-Kae Fwu
Xia Zhu
Q. Yang
Yuan Zhu
47
12
0
10 Mar 2020
What is the State of Neural Network Pruning?
Davis W. Blalock
Jose Javier Gonzalez Ortiz
Jonathan Frankle
John Guttag
293
1,058
0
06 Mar 2020
Towards Practical Lottery Ticket Hypothesis for Adversarial Training
Bai Li
Shiqi Wang
Yunhan Jia
Yantao Lu
Zhenyu Zhong
Lawrence Carin
Suman Jana
AAML
142
14
0
06 Mar 2020
Pruning Filters while Training for Efficiently Optimizing Deep Learning Networks
Sourjya Roy
Priyadarshini Panda
G. Srinivasan
A. Raghunathan
3DPC · VLM
70
19
0
05 Mar 2020
Page 23 of 32