Pruning Filters for Efficient ConvNets
arXiv:1608.08710 · 31 August 2016
Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf
3DPC
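The cited paper ranks the filters (output channels) of each convolutional layer by the sum of the absolute values of their kernel weights, i.e. the per-filter L1 norm, removes the lowest-ranked filters together with their dependent feature maps, and then fine-tunes the slimmed network. The sketch below illustrates just that ranking criterion in PyTorch; the helper names, the pruning ratio, and the example layer are illustrative assumptions, not code from the paper or any released implementation.

```python
# Minimal sketch of the L1-norm filter-ranking criterion from
# "Pruning Filters for Efficient ConvNets" (Li et al., 2016).
# Helper names and the example layer are assumptions for illustration.
import torch
import torch.nn as nn


def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    """L1 norm of each output filter: sum of |w| over (in_channels, kH, kW)."""
    # conv.weight has shape (out_channels, in_channels, kH, kW).
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))


def filters_to_keep(conv: nn.Conv2d, prune_ratio: float) -> torch.Tensor:
    """Indices of filters that survive pruning away the smallest-L1 fraction."""
    scores = l1_filter_scores(conv)
    n_keep = max(1, int(round(conv.out_channels * (1.0 - prune_ratio))))
    keep = torch.topk(scores, n_keep).indices   # filters with the largest L1 norm
    return torch.sort(keep).values              # restore original filter order


if __name__ == "__main__":
    layer = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3)
    kept = filters_to_keep(layer, prune_ratio=0.5)
    print(f"keeping {kept.numel()} of {layer.out_channels} filters")
```

In the paper, the surviving filters are copied into a thinner convolution layer and the next layer's corresponding input channels are pruned to match, followed by retraining; the sketch above stops at the ranking step.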

Papers citing "Pruning Filters for Efficient ConvNets"

50 of the 1,579 citing papers are shown on this page, newest first.

GeneCAI: Genetic Evolution for Acquiring Compact AI
Mojan Javaheripi, Mohammad Samragh, T. Javidi, F. Koushanfar
08 Apr 2020

LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression
Yihuan Mao, Yujing Wang, Chufan Wu, Chen Zhang, Yang-Feng Wang, Yaming Yang, Quanlu Zhang, Yunhai Tong, Jing Bai
08 Apr 2020

Teacher-Class Network: A Neural Network Compression Mechanism
Shaiq Munir Malik, Muhammad Umair Haider, Fnu Mohbat, Musab Rasheed, M. Taj
07 Apr 2020

How Do You Act? An Empirical Study to Understand Behavior of Deep Reinforcement Learning Agents
Richard Meyes, Moritz Schneider, Tobias Meisen
07 Apr 2020

Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio
Zhengsu Chen, J. Niu, Lingxi Xie, Xuefeng Liu, Longhui Wei, Qi Tian
06 Apr 2020

A Learning Framework for n-bit Quantized Neural Networks toward FPGAs
Jun Chen, Lu Liu, Yong Liu, Xianfang Zeng
06 Apr 2020 · MQ

DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation
Xuefei Ning, Tianchen Zhao, Wenshuo Li, Peng Lei, Yu Wang, Huazhong Yang
05 Apr 2020

TimeGate: Conditional Gating of Segments in Long-range Activities
Noureldien Hussein, Mihir Jain, B. Bejnordi
03 Apr 2020 · AI4TS

Composition of Saliency Metrics for Channel Pruning with a Myopic Oracle
Kaveena Persand, Andrew Anderson, David Gregg
03 Apr 2020

Under the Hood of Neural Networks: Characterizing Learned Representations by Functional Neuron Populations and Network Ablations
Richard Meyes, Constantin Waubert de Puiseau, Andres Felipe Posada-Moreno, Tobias Meisen
02 Apr 2020 · AI4CE

Continual Learning with Node-Importance based Adaptive Group Sparse Regularization
Sangwon Jung, Hongjoon Ahn, Sungmin Cha, Taesup Moon
30 Mar 2020 · CLL

How Not to Give a FLOP: Combining Regularization and Pruning for Efficient Inference
Tai Vu, Emily Wen, Roy Nehoran
30 Mar 2020

Rethinking Depthwise Separable Convolutions: How Intra-Kernel Correlations Lead to Improved MobileNets
D. Haase, Manuel Amthor
30 Mar 2020

Nonconvex sparse regularization for deep neural networks and its optimality
Ilsang Ohn, Yongdai Kim
26 Mar 2020

SPFCN: Select and Prune the Fully Convolutional Networks for Real-time Parking Slot Detection
Zhuoping Yu, Zhong Gao, Hansheng Chen, Yuyao Huang
25 Mar 2020

A Survey of Methods for Low-Power Deep Learning and Computer Vision
Abhinav Goel, Caleb Tung, Yung-Hsiang Lu, George K. Thiruvathukal
24 Mar 2020 · VLM

Dynamic Narrowing of VAE Bottlenecks Using GECO and L0 Regularization
Cedric De Boom, Samuel T. Wauthier, Tim Verbelen, Bart Dhoedt
24 Mar 2020 · DRL

Steepest Descent Neural Architecture Optimization: Escaping Local Optimum with Signed Neural Splitting
Lemeng Wu, Mao Ye, Qi Lei, Jason D. Lee, Qiang Liu
23 Mar 2020

Efficient Crowd Counting via Structured Knowledge Transfer
Lingbo Liu, Jiaqi Chen, Hefeng Wu, Tianshui Chen, Guanbin Li, Liang Lin
23 Mar 2020

Review of data analysis in vision inspection of power lines with an in-depth discussion of deep learning technology
Xinyu Liu, Xiren Miao, Hao Jiang, Jia Chen
22 Mar 2020

Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression
Yawei Li, Shuhang Gu, Christoph Mayer, Luc Van Gool, Radu Timofte
19 Mar 2020

MINT: Deep Network Compression via Mutual Information-based Neuron Trimming
Madan Ravi Ganesh, Jason J. Corso, Salimeh Yasaei Sekeh
18 Mar 2020 · MQ

Collaborative Distillation for Ultra-Resolution Universal Style Transfer
Huan Wang, Yijun Li, Yuehai Wang, Haoji Hu, Ming-Hsuan Yang
18 Mar 2020

SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Weights Flipping
Jiaxiong Qiu, Cai Chen, Shuaicheng Liu, B. Zeng
16 Mar 2020

Resolution Adaptive Networks for Efficient Inference
Le Yang, Yizeng Han, Xi Chen, Shiji Song, Jifeng Dai, Gao Huang
16 Mar 2020

Channel Pruning Guided by Classification Loss and Feature Importance
Jinyang Guo, Wanli Ouyang, Dong Xu
15 Mar 2020

CoCoPIE: Making Mobile AI Sweet As PIE -- Compression-Compilation Co-Design Goes a Long Way
Shaoshan Liu, Bin Ren, Xipeng Shen, Yanzhi Wang
14 Mar 2020

SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration
Jun Shi, Jianfeng Xu, K. Tasaka, Zhibo Chen
12 Mar 2020

Highly Efficient Salient Object Detection with 100K Parameters
Shanghua Gao, Yong-qiang Tan, Ming-Ming Cheng, Chengze Lu, Yunpeng Chen, Shuicheng Yan
12 Mar 2020

Channel Pruning via Optimal Thresholding
Yun Ye, Ganmei You, Jong-Kae Fwu, Xia Zhu, Q. Yang, Yuan Zhu
10 Mar 2020

What is the State of Neural Network Pruning?
Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
06 Mar 2020

Towards Practical Lottery Ticket Hypothesis for Adversarial Training
Bai Li, Shiqi Wang, Yunhan Jia, Yantao Lu, Zhenyu Zhong, Lawrence Carin, Suman Jana
06 Mar 2020 · AAML

Pruning Filters while Training for Efficiently Optimizing Deep Learning Networks
Sourjya Roy, Priyadarshini Panda, G. Srinivasan, A. Raghunathan
05 Mar 2020 · 3DPC, VLM

Cluster Pruning: An Efficient Filter Pruning Method for Edge AI Vision Applications
Chinthaka Gamanayake, Lahiru Jayasinghe, Benny Kai Kiat Ng, Chau Yuen
05 Mar 2020 · VLM

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin
05 Mar 2020

Privacy-preserving Learning via Deep Net Pruning
Yangsibo Huang, Yushan Su, S. S. Ravi, Zhao Song, Sanjeev Arora, Keqin Li
04 Mar 2020 · MLT

Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection
Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam R. Klivans, Qiang Liu
03 Mar 2020

A Note on Latency Variability of Deep Neural Networks for Mobile Inference
Luting Yang, Bingqian Lu, Shaolei Ren
29 Feb 2020

Learned Threshold Pruning
K. Azarian, Yash Bhalgat, Jinwon Lee, Tijmen Blankevoort
28 Feb 2020 · MQ

HOTCAKE: Higher Order Tucker Articulated Kernels for Deeper CNN Compression
R. Lin, Ching-Yun Ko, Zhuolun He, Cong Chen, Yuan Cheng, Hao Yu, G. Chesi, Ngai Wong
28 Feb 2020

Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph E. Gonzalez
26 Feb 2020

HYDRA: Pruning Adversarially Robust Neural Networks
Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana
24 Feb 2020 · AAML

HRank: Filter Pruning using High-Rank Feature Map
Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang, Yonghong Tian, Ling Shao
24 Feb 2020

Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks
Sai Aparna Aketi, Sourjya Roy, A. Raghunathan, Kaushik Roy
23 Feb 2020

HarDNN: Feature Map Vulnerability Evaluation in CNNs
Abdulrahman Mahmoud, S. Hari, Christopher W. Fletcher, Sarita Adve, Charbel Sakr, Naresh R. Shanbhag, Pavlo Molchanov, Michael B. Sullivan, Timothy Tsai, S. Keckler
22 Feb 2020

Robust Pruning at Initialization
Soufiane Hayou, Jean-François Ton, Arnaud Doucet, Yee Whye Teh
19 Feb 2020

Knapsack Pruning with Inner Distillation
Y. Aflalo, Asaf Noy, Ming Lin, Itamar Friedman, Lihi Zelnik-Manor
19 Feb 2020 · 3DPC

Structured Sparsification with Joint Optimization of Group Convolution and Channel Shuffle
Xinyu Zhang, Kai Zhao, Taihong Xiao, Ming-Ming Cheng, Ming-Hsuan Yang
19 Feb 2020

Identifying Critical Neurons in ANN Architectures using Mixed Integer Programming
M. Elaraby, Guy Wolf, Margarida Carvalho
17 Feb 2020

DeepLight: Deep Lightweight Feature Interactions for Accelerating CTR Predictions in Ad Serving
Wei Deng, Junwei Pan, Tian Zhou, Deguang Kong, Aaron Flores, Guang Lin
17 Feb 2020