Post-training deep neural network pruning via layer-wise calibration

Ivan Lazarevich, Alexander Kozlov, Nikita Malinin
30 April 2021 · arXiv:2104.15023 · 3DPC

Papers citing "Post-training deep neural network pruning via layer-wise calibration"

18 / 18 papers shown

Can Vision Transformers Learn without Natural Images?
Kodai Nakashima, Hirokatsu Kataoka, Asato Matsumoto, K. Iwata, Nakamasa Inoue
ViT · 54 · 34 · 0 · 24 Mar 2021

BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction
Yuhang Li, Ruihao Gong, Xu Tan, Yang Yang, Peng Hu, Qi Zhang, F. Yu, Wei Wang, Shi Gu
MQ · 153 · 444 · 0 · 10 Feb 2021

Pre-training without Natural Images
Hirokatsu Kataoka, Kazushige Okayasu, Asato Matsumoto, Eisuke Yamagata, Ryosuke Yamada, Nakamasa Inoue, Akio Nakamura, Y. Satoh
135 · 120 · 0 · 21 Jan 2021

Layer-Wise Data-Free CNN Compression
Maxwell Horton, Yanzi Jin, Ali Farhadi, Mohammad Rastegari
MQ · 54 · 17 · 0 · 18 Nov 2020

Accelerating Sparse DNN Models without Hardware-Support via Tile-Wise Sparsity
Cong Guo, B. Hsueh, Jingwen Leng, Yuxian Qiu, Yue Guan, Zehuan Wang, Xiaoying Jia, Xipeng Li, Minyi Guo, Yuhao Zhu
71 · 83 · 0 · 29 Aug 2020

EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning
Bailin Li, Bowen Wu, Jiang Su, Guangrun Wang, Liang Lin
98 · 175 · 0 · 06 Jul 2020

Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming
Itay Hubara, Yury Nahshan, Y. Hanani, Ron Banner, Daniel Soudry
MQ · 109 · 128 · 0 · 14 Jun 2020

Up or Down? Adaptive Rounding for Post-Training Quantization
Markus Nagel, Rana Ali Amjad, M. V. Baalen, Christos Louizos, Tijmen Blankevoort
MQ · 95 · 588 · 0 · 22 Apr 2020

What is the State of Neural Network Pruning?
Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
280 · 1,054 · 0 · 06 Mar 2020

Soft Threshold Weight Reparameterization for Learnable Sparsity
Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, Ali Farhadi
155 · 247 · 0 · 08 Feb 2020

Fast Sparse ConvNets
Erich Elsen, Marat Dukhan, Trevor Gale, Karen Simonyan
174 · 153 · 0 · 21 Nov 2019

Lookahead Optimizer: k steps forward, 1 step back
Michael Ruogu Zhang, James Lucas, Geoffrey E. Hinton, Jimmy Ba
ODL · 159 · 733 · 0 · 19 Jul 2019

Data-Free Quantization Through Weight Equalization and Bias Correction
Markus Nagel, M. V. Baalen, Tijmen Blankevoort, Max Welling
MQ · 75 · 515 · 0 · 11 Jun 2019

The State of Sparsity in Deep Neural Networks
Trevor Gale, Erich Elsen, Sara Hooker
165 · 763 · 0 · 25 Feb 2019

Averaging Weights Leads to Wider Optima and Better Generalization
Pavel Izmailov, Dmitrii Podoprikhin, T. Garipov, Dmitry Vetrov, A. Wilson
FedML · MoMe · 143 · 1,673 · 0 · 14 Mar 2018

AMC: AutoML for Model Compression and Acceleration on Mobile Devices
Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li Li, Song Han
109 · 1,349 · 0 · 10 Feb 2018

To prune, or not to prune: exploring the efficacy of pruning for model compression
Michael Zhu, Suyog Gupta
202 · 1,282 · 0 · 05 Oct 2017

Data-free parameter pruning for Deep Neural Networks
Suraj Srinivas, R. Venkatesh Babu
3DPC · 87 · 547 · 0 · 22 Jul 2015