Boosting Pruned Networks with Linear Over-parameterization
arXiv:2204.11444 · 25 April 2022
Yundi Qian, Siyuan Pan, Xiaoshuang Li, Jie Zhang, Liang Hou, Xiaobing Tu

Papers citing "Boosting Pruned Networks with Linear Over-parameterization"

Showing 20 of 20 papers.

1. Pruning and Quantization for Deep Neural Network Acceleration: A Survey. Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang. 24 Jan 2021.
2. RepVGG: Making VGG-style ConvNets Great Again. Xiaohan Ding, Xinming Zhang, Ningning Ma, Jungong Han, Guiguang Ding, Jian Sun. 11 Jan 2021.
3. Accelerate CNNs from Three Dimensions: A Comprehensive Pruning Framework. Wenxiao Wang, Minghao Chen, Shuai Zhao, Long Chen, Jinming Hu, Haifeng Liu, Deng Cai, Xiaofei He, Wei Liu. 10 Oct 2020.
4. ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting. Xiaohan Ding, Tianxiang Hao, Jianchao Tan, Ji Liu, Jungong Han, Yuchen Guo, Guiguang Ding. 07 Jul 2020.
5. EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning. Bailin Li, Bowen Wu, Jiang Su, Guangrun Wang, Liang Lin. 06 Jul 2020.
6. DO-Conv: Depthwise Over-parameterized Convolutional Layer. Jinming Cao, Yangyan Li, Mingchao Sun, Ying-Cong Chen, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen, Changhe Tu. 22 Jun 2020.
7. HRank: Filter Pruning using High-Rank Feature Map. Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang, Yonghong Tian, Ling Shao. 24 Feb 2020.
8. SpArch: Efficient Architecture for Sparse Matrix Multiplication. Zhekai Zhang, Hanrui Wang, Song Han, W. Dally. 20 Feb 2020.
9. ACNet: Strengthening the Kernel Skeletons for Powerful CNN via Asymmetric Convolution Blocks. Xiaohan Ding, Yuchen Guo, Guiguang Ding, Jiawei Han. 11 Aug 2019.
10. Similarity-Preserving Knowledge Distillation. Frederick Tung, Greg Mori. 23 Jul 2019.
11. Learning to Design Circuits. Hanrui Wang, Jiacheng Yang, Hae-Seung Lee, Song Han. 05 Dec 2018.
12. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. Ningning Ma, Xiangyu Zhang, Haitao Zheng, Jian Sun. 30 Jul 2018.
13. A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers. Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, M. Fardad, Yanzhi Wang. 10 Apr 2018.
14. AMC: AutoML for Model Compression and Acceleration on Mobile Devices. Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li Li, Song Han. 10 Feb 2018.
15. NISP: Pruning Networks using Neuron Importance Score Propagation. Ruichi Yu, Ang Li, Chun-Fu Chen, Jui-Hsin Lai, Vlad I. Morariu, Xintong Han, M. Gao, Ching-Yung Lin, L. Davis. 16 Nov 2017.
16. To prune, or not to prune: exploring the efficacy of pruning for model compression. Michael Zhu, Suyog Gupta. 05 Oct 2017.
17. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression. Jian-Hao Luo, Jianxin Wu, Weiyao Lin. 20 Jul 2017.
18. Channel Pruning for Accelerating Very Deep Neural Networks. Yihui He, Xiangyu Zhang, Jian Sun. 19 Jul 2017.
19. Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon. Xin Luna Dong, Shangyu Chen, Sinno Jialin Pan. 22 May 2017.
20. Pruning Filters for Efficient ConvNets. Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf. 31 Aug 2016.