Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon

22 May 2017
Xin Dong, Shangyu Chen, Sinno Jialin Pan

Papers citing "Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon"

40 / 90 papers shown

Group Fisher Pruning for Practical Network Compression
Liyang Liu, Shilong Zhang, Zhanghui Kuang, Aojun Zhou, Jingliang Xue, Xinjiang Wang, Yimin Chen, Wenming Yang, Q. Liao, Wayne Zhang
02 Aug 2021

M-FAC: Efficient Matrix-Free Approximations of Second-Order Information
Elias Frantar, Eldar Kurtic, Dan Alistarh
07 Jul 2021

Learned Token Pruning for Transformers
Sehoon Kim, Sheng Shen, D. Thorsley, A. Gholami, Woosuk Kwon, Joseph Hassoun, Kurt Keutzer
02 Jul 2021

CompConv: A Compact Convolution Module for Efficient Feature Learning
Chen Zhang, Yinghao Xu, Yujun Shen
VLM, SSL
19 Jun 2021

Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better
Gaurav Menghani
VLM, MedIm
16 Jun 2021

Can Subnetwork Structure be the Key to Out-of-Distribution Generalization?
Dinghuai Zhang, Kartik Ahuja, Yilun Xu, Yisen Wang, Aaron Courville
OOD
05 Jun 2021

1xN Pattern for Pruning Convolutional Neural Networks
Mingbao Lin, Yu-xin Zhang, Yuchao Li, Bohong Chen, Rongrong Ji, Mengdi Wang, Shen Li, Yonghong Tian, Rongrong Ji
3DPC
31 May 2021

Stealthy Backdoors as Compression Artifacts
Yulong Tian, Fnu Suya, Fengyuan Xu, David Evans
30 Apr 2021

Rethinking Weight Decay For Efficient Neural Network Pruning
Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, T. Hannagan, David Bertrand
20 Nov 2020

Layer-Wise Data-Free CNN Compression
Maxwell Horton, Yanzi Jin, Ali Farhadi, Mohammad Rastegari
MQ
18 Nov 2020

Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks
Julieta Martinez, Jashan Shewakramani, Ting Liu, Ioan Andrei Bârsan, Wenyuan Zeng, R. Urtasun
MQ
29 Oct 2020

Layer-adaptive sparsity for the Magnitude-based Pruning
Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
15 Oct 2020

Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot
Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Liwei Wang, Jason D. Lee
22 Sep 2020

Efficient Transformer-based Large Scale Language Representations using Hardware-friendly Block Structured Pruning
Bingbing Li, Zhenglun Kong, Tianyun Zhang, Ji Li, Zechao Li, Hang Liu, Caiwen Ding
VLM
17 Sep 2020

CNNPruner: Pruning Convolutional Neural Networks with Visual Analytics
Guan Li, Junpeng Wang, Han-Wei Shen, Kaixin Chen, Guihua Shan, Zhonghua Lu
AAML
08 Sep 2020

Training Sparse Neural Networks using Compressed Sensing
Jonathan W. Siegel, Jianhong Chen, Pengchuan Zhang, Jinchao Xu
21 Aug 2020

Embedding Differentiable Sparsity into Deep Neural Network
Yongjin Lee
23 Jun 2020

Exploring Weight Importance and Hessian Bias in Model Pruning
Mingchen Li, Yahya Sattar, Christos Thrampoulidis, Samet Oymak
19 Jun 2020

A Framework for Neural Network Pruning Using Gibbs Distributions
Alex Labach, S. Valaee
08 Jun 2020

An Overview of Neural Network Compression
James O'Neill
AI4CE
05 Jun 2020

Identifying Critical Neurons in ANN Architectures using Mixed Integer Programming
M. Elaraby, Guy Wolf, Margarida Carvalho
17 Feb 2020

Proving the Lottery Ticket Hypothesis: Pruning is All You Need
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir
03 Feb 2020

Filter Sketch for Network Pruning
Mingbao Lin, Liujuan Cao, Shaojie Li, QiXiang Ye, Yonghong Tian, Jianzhuang Liu, Q. Tian, Rongrong Ji
CLIP, 3DPC
23 Jan 2020

DBP: Discrimination Based Block-Level Pruning for Deep Model Acceleration
Wenxiao Wang, Shuai Zhao, Minghao Chen, Jinming Hu, Deng Cai, Haifeng Liu
21 Dec 2019

Global Sparse Momentum SGD for Pruning Very Deep Neural Networks
Xiaohan Ding, Guiguang Ding, Xiangxin Zhou, Yuchen Guo, Jungong Han, Ji Liu
27 Sep 2019

Sparse Networks from Scratch: Faster Training without Losing Performance
Tim Dettmers, Luke Zettlemoyer
10 Jul 2019

COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning
Wenxiao Wang, Cong Fu, Jishun Guo, Deng Cai, Xiaofei He
VLM
25 Jun 2019

Pruning-Aware Merging for Efficient Multitask Inference
Xiaoxi He, Dawei Gao, Zimu Zhou, Yongxin Tong, Lothar Thiele
MoMe
23 May 2019

Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates using ADMM
Shaokai Ye, Xiaoyu Feng, Tianyun Zhang, Xiaolong Ma, Sheng Lin, ..., Jian Tang, M. Fardad, X. Lin, Yongpan Liu, Yanzhi Wang
MQ
23 Mar 2019

ADMM-NN: An Algorithm-Hardware Co-Design Framework of DNNs Using Alternating Direction Method of Multipliers
Ao Ren, Tianyun Zhang, Shaokai Ye, Jiayu Li, Wenyao Xu, Xuehai Qian, X. Lin, Yanzhi Wang
MQ
31 Dec 2018

NIPS - Not Even Wrong? A Systematic Review of Empirically Complete Demonstrations of Algorithmic Effectiveness in the Machine Learning and Artificial Intelligence Literature
Franz J. Király, Bilal A. Mateen, R. Sonabend
18 Dec 2018

Efficient Structured Pruning and Architecture Searching for Group Convolution
Ruizhe Zhao, Wayne Luk
23 Nov 2018

Fast On-the-fly Retraining-free Sparsification of Convolutional Neural Networks
Amir H. Ashouri, T. Abdelrahman, Alwyn Dos Remedios
MQ
10 Nov 2018

Progressive Weight Pruning of Deep Neural Networks using ADMM
Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Kaidi Xu, ..., M. Fardad, Sijia Liu, Xiang Chen, X. Lin, Yanzhi Wang
AI4CE
17 Oct 2018

Rate Distortion For Model Compression: From Theory To Practice
Weihao Gao, Yu-Han Liu, Chong-Jun Wang, Sewoong Oh
09 Oct 2018

SNIP: Single-shot Network Pruning based on Connection Sensitivity
Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr
VLM
04 Oct 2018

Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds
Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus
15 Apr 2018

A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers
Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, M. Fardad, Yanzhi Wang
10 Apr 2018

Re-Weighted Learning for Sparsifying Deep Neural Networks
Igor Fedorov, Bhaskar D. Rao
05 Feb 2018

Learning Compact Neural Networks with Regularization
Samet Oymak
MLT
05 Feb 2018