Layer-compensated Pruning for Resource-constrained Convolutional Neural Networks
1 October 2018
Ting-Wu Chin, Cha Zhang, Diana Marculescu

Papers citing "Layer-compensated Pruning for Resource-constrained Convolutional Neural Networks"

12 / 12 papers shown
Pruning On-the-Fly: A Recoverable Pruning Method without Fine-tuning
Danyang Liu, Xue Liu
24 Dec 2022

Entropy Induced Pruning Framework for Convolutional Neural Networks
Yihe Lu, Ziyu Guan, Yaming Yang, Maoguo Gong, Wei Zhao, Kaiyuan Feng
13 Aug 2022

SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration
Jun Shi, Jianfeng Xu, K. Tasaka, Zhibo Chen
12 Mar 2020

PoPS: Policy Pruning and Shrinking for Deep Reinforcement Learning
Dor Livne, Kobi Cohen
14 Jan 2020

AutoML: A Survey of the State-of-the-Art
Xin He, Kaiyong Zhao, Xiaowen Chu
02 Aug 2019

Parameterized Structured Pruning for Deep Neural Networks
Günther Schindler, Wolfgang Roth, Franz Pernkopf, Holger Froening
12 Jun 2019

OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks
Jiashi Li, Q. Qi, Jingyu Wang, Ce Ge, Yujian Betterest Li, Zhangzhang Yue, Haifeng Sun
28 May 2019

Dynamic Neural Network Channel Execution for Efficient Training
Simeon E. Spasov, Pietro Lio
15 May 2019

Hybrid Pruning: Thinner Sparse Networks for Fast Inference on Edge Devices
Xiaofang Xu, M. Park, C. Brick
01 Nov 2018

Rethinking the Value of Network Pruning
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell
11 Oct 2018

A Closer Look at Structured Pruning for Neural Network Compression
Elliot J. Crowley, Jack Turner, Amos Storkey, Michael F. P. O'Boyle
10 Oct 2018

NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications
Tien-Ju Yang, Andrew G. Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, Hartwig Adam
09 Apr 2018