Pruning Filters for Efficient ConvNets
arXiv:1608.08710 (v3, latest) · 31 August 2016
Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf
Topic: 3DPC
Links: arXiv (abs) · PDF · HTML
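For context on what the citing papers below build on: the cited paper ranks the filters of each convolutional layer by the L1 norm of their weights, removes the lowest-ranked filters together with the matching input channels of the following layer, and then fine-tunes the slimmed network. The snippet below is a minimal NumPy sketch of that ranking-and-slicing step under assumed weight shapes; the function name `l1_filter_prune`, the `prune_ratio` parameter, and the toy shapes are illustrative, not code from the paper.

```python
import numpy as np

def l1_filter_prune(conv_w, next_w, prune_ratio=0.3):
    """Rank one conv layer's filters by L1 norm and drop the smallest ones.

    conv_w : (out_ch, in_ch, kH, kW) weights of the layer being pruned.
    next_w : (next_out, out_ch, kH, kW) weights of the following conv layer,
             whose input channels must shrink to match the kept filters.
    Returns the pruned weight tensors and the indices of the kept filters.
    """
    out_ch = conv_w.shape[0]
    n_keep = max(1, int(round(out_ch * (1.0 - prune_ratio))))

    # Importance of each filter = sum of absolute kernel weights (L1 norm).
    scores = np.abs(conv_w).reshape(out_ch, -1).sum(axis=1)

    # Keep the filters with the largest L1 norms, preserving their order.
    keep = np.sort(np.argsort(scores)[-n_keep:])

    pruned_conv = conv_w[keep]      # fewer output filters in this layer
    pruned_next = next_w[:, keep]   # matching input channels removed downstream
    return pruned_conv, pruned_next, keep

# Toy usage: a 64-filter 3x3 layer followed by a 128-filter layer.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w1 = rng.normal(size=(64, 32, 3, 3))
    w2 = rng.normal(size=(128, 64, 3, 3))
    w1_p, w2_p, kept = l1_filter_prune(w1, w2, prune_ratio=0.5)
    print(w1_p.shape, w2_p.shape, len(kept))  # (32, 32, 3, 3) (128, 32, 3, 3) 32
```

In the paper this pruning step is followed by fine-tuning to recover accuracy; the sketch only covers the filter-selection arithmetic.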

Papers citing "Pruning Filters for Efficient ConvNets"
Showing 50 of 1,596 citing papers.
Neural networks adapting to datasets: learning network size and topology
  R. Janik, A. Nowak · AI4CE · 22 Jun 2020
Exploiting Weight Redundancy in CNNs: Beyond Pruning and Quantization
  Yuan Wen, David Gregg · MQ · 22 Jun 2020
Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation
  Duong H. Le, Vo Trung Nhan, N. Thoai · VLM · 20 Jun 2020
Keep Your AI-es on the Road: Tackling Distracted Driver Detection with Convolutional Neural Networks and Targeted Data Augmentation
  Nikka Mofid, Jasmine Bayrooti, S. Ravi · 19 Jun 2020
FrostNet: Towards Quantization-Aware Network Architecture Search
  Taehoon Kim, Y. Yoo, Jihoon Yang · MQ · 17 Jun 2020
Exploring Sparsity in Image Super-Resolution for Efficient Inference
  Longguang Wang, Xiaoyu Dong, Yingqian Wang, Xinyi Ying, Zaiping Lin, W. An, Yulan Guo · SupR · 17 Jun 2020
Cogradient Descent for Bilinear Optimization
  Lian Zhuo, Baochang Zhang, Linlin Yang, Hanlin Chen, QiXiang Ye, David Doermann, G. Guo, Rongrong Ji · 16 Jun 2020
Real-time Universal Style Transfer on High-resolution Images via Zero-channel Pruning
  Jie An, Tao Li, Haozhi Huang, Li Shen, Xuan Wang, Yongyi Tang, Jinwen Ma, Wei Liu, Jiebo Luo · 3DH, 3DPC · 16 Jun 2020
Now that I can see, I can improve: Enabling data-driven finetuning of CNNs on the edge
  A. Rajagopal, C. Bouganis · 15 Jun 2020
Finding trainable sparse networks through Neural Tangent Transfer
  Tianlin Liu, Friedemann Zenke · 15 Jun 2020
Optimal Lottery Tickets via SubsetSum: Logarithmic Over-Parameterization is Sufficient
  Ankit Pensia, Shashank Rajput, Alliot Nagle, Harit Vishwakarma, Dimitris Papailiopoulos · 14 Jun 2020
Multigrid-in-Channels Architectures for Wide Convolutional Neural Networks
  Jonathan Ephrath, Lars Ruthotto, Eran Treister · 11 Jun 2020
A Tailored Convolutional Neural Network for Nonlinear Manifold Learning of Computational Physics Data using Unstructured Spatial Discretizations
  John Tencer, Kevin Potter · AI4CE · 11 Jun 2020
Adjoined Networks: A Training Paradigm with Applications to Network Compression
  Utkarsh Nath, Shrinu Kushagra, Yingzhen Yang · 10 Jun 2020
Condensing Two-stage Detection with Automatic Object Key Part Discovery
  Zhe Chen, Jing Zhang, Dacheng Tao · 10 Jun 2020
Deeply Shared Filter Bases for Parameter-Efficient Convolutional Neural Networks
  Woochul Kang, Daeyeon Kim · 09 Jun 2020
A Framework for Neural Network Pruning Using Gibbs Distributions
  Alex Labach, S. Valaee · 08 Jun 2020
EDCompress: Energy-Aware Model Compression for Dataflows
  Zhehui Wang, Yaoyu Zhang, Qiufeng Wang, Rick Siow Mong Goh · 08 Jun 2020
Differentiable Neural Input Search for Recommender Systems
  Weiyu Cheng, Yanyan Shen, Linpeng Huang · 08 Jun 2020
Novel Adaptive Binary Search Strategy-First Hybrid Pyramid- and Clustering-Based CNN Filter Pruning Method without Parameters Setting
  K. Chung, Yu-Lun Chang, Bo-Wei Tsai · 08 Jun 2020
EDropout: Energy-Based Dropout and Pruning of Deep Neural Networks
  Hojjat Salehinejad, S. Valaee · 07 Jun 2020
MMA Regularization: Decorrelating Weights of Neural Networks by Maximizing the Minimal Angles
  Zhennan Wang, Canqun Xiang, Wenbin Zou, Chen Xu · 06 Jun 2020
Scientific Calculator for Designing Trojan Detectors in Neural Networks
  P. Bajcsy, N. Schaub, Michael Majurski · 05 Jun 2020
Accelerating Natural Language Understanding in Task-Oriented Dialog
  Ojas Ahuja, Shrey Desai · VLM · 05 Jun 2020
An Overview of Neural Network Compression
  James O'Neill · AI4CE · 05 Jun 2020
Shapley Value as Principled Metric for Structured Network Pruning
  Marco Ancona, Cengiz Öztireli, Markus Gross · 02 Jun 2020
Pruning via Iterative Ranking of Sensitivity Statistics
  Stijn Verdenius, M. Stol, Patrick Forré · AAML · 01 Jun 2020
CoDiNet: Path Distribution Modeling with Consistency and Diversity for Dynamic Routing
  Huanyu Wang, Zequn Qin, Songyuan Li, Xi Li · 29 May 2020
Exploiting Non-Linear Redundancy for Neural Model Compression
  Muhammad Ahmed Shah, R. Olivier, Bhiksha Raj · 28 May 2020
A Feature-map Discriminant Perspective for Pruning Deep Neural Networks
  Zejiang Hou, S. Kung · 28 May 2020
PruneNet: Channel Pruning via Global Importance
  A. Khetan, Zohar Karnin · 22 May 2020
Position-based Scaled Gradient for Model Quantization and Pruning
  Jangho Kim, Kiyoon Yoo, Nojun Kwak · MQ · 22 May 2020
Feature Statistics Guided Efficient Filter Pruning
  Hang Li, Chen Ma, Wenyuan Xu, Xue Liu · 21 May 2020
Learning from a Lightweight Teacher for Efficient Knowledge Distillation
  Yuang Liu, Wei Zhang, Jun Wang · 19 May 2020
Joint Multi-Dimension Pruning via Numerical Gradient Update
  Zechun Liu, Xinming Zhang, Zhiqiang Shen, Zhe Li, Yichen Wei, Kwang-Ting Cheng, Jian Sun · 18 May 2020
Sparse Mixture of Local Experts for Efficient Speech Enhancement
  Aswin Sivaraman, Minje Kim · MoE · 16 May 2020
MicroNet for Efficient Language Modeling
  Zhongxia Yan, Hanrui Wang, Demi Guo, Song Han · 16 May 2020
A flexible, extensible software framework for model compression based on the LC algorithm
  Yerlan Idelbayev, Miguel Á. Carreira-Perpiñán · 15 May 2020
PENNI: Pruned Kernel Sharing for Efficient CNN Inference
  Shiyu Li, Edward Hanson, H. Li, Yiran Chen · 14 May 2020
Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
  Junjie Liu, Zhe Xu, Runbin Shi, R. Cheung, Hayden Kwok-Hay So · 14 May 2020
RSO: A Gradient Free Sampling Based Approach For Training Deep Neural Networks
  Rohun Tripathi, Bharat Singh · 12 May 2020
schuBERT: Optimizing Elements of BERT
  A. Khetan, Zohar Karnin · 09 May 2020
Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey
  Jiayi Liu, S. Tripathi, Unmesh Kurup, Mohak Shah · 3DPC, MedIm · 08 May 2020
SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation
  Yang Zhao, Xiaohan Chen, Yue Wang, Chaojian Li, Haoran You, Y. Fu, Yuan Xie, Zhangyang Wang, Yingyan Lin · MQ · 07 May 2020
DMCP: Differentiable Markov Channel Pruning for Neural Networks
  Shaopeng Guo, Yujie Wang, Quanquan Li, Junjie Yan · 07 May 2020
TIRAMISU: A Polyhedral Compiler for Dense and Sparse Deep Learning
  Riyadh Baghdadi, Abdelkader Nadir Debbagh, K. Abdous, Fatima-Zohra Benhamida, Alex Renda, Jonathan Frankle, Michael Carbin, Saman P. Amarasinghe · 07 May 2020
Dependency Aware Filter Pruning
  Kai Zhao, Xinyu Zhang, Qi Han, Ming-Ming Cheng · 06 May 2020
AIBench Scenario: Scenario-distilling AI Benchmarking
  Wanling Gao, Fei Tang, Jianfeng Zhan, Xu Wen, Lei Wang, Zheng Cao, Chuanxin Lan, Chunjie Luo, Xiaoli Liu, Zihan Jiang · 06 May 2020
NTIRE 2020 Challenge on Image and Video Deblurring
  Seungjun Nah, Sanghyun Son, Radu Timofte, Kyoung Mu Lee · 04 May 2020
Importance Driven Continual Learning for Segmentation Across Domains
  S. Özgün, Anne-Marie Rickmann, Abhijit Guha Roy, Christian Wachinger · OOD, CLL · 30 Apr 2020