Value-aware Quantization for Training and Inference of Neural Networks
Eunhyeok Park, S. Yoo, Peter Vajda
MQ
20 April 2018

Papers citing "Value-aware Quantization for Training and Inference of Neural Networks" (27 papers shown)

Data-free Weight Compress and Denoise for Large Language Models
Runyu Peng, Yunhua Zhou, Qipeng Guo, Yang Gao, Hang Yan, Xipeng Qiu, Dahua Lin
26 Feb 2024

eDKM: An Efficient and Accurate Train-time Weight Clustering for Large Language Models
Minsik Cho, Keivan Alizadeh Vahid, Qichen Fu, Saurabh N. Adya, C. C. D. Mundo, Mohammad Rastegari, Devang Naik, Peter Zatloukal
MQ
02 Sep 2023

Efficient and Effective Methods for Mixed Precision Neural Network Quantization for Faster, Energy-efficient Inference
Deepika Bablani, J. McKinstry, S. K. Esser, R. Appuswamy, D. Modha
MQ
30 Jan 2023

CSMPQ: Class Separability Based Mixed-Precision Quantization
Ming-Yu Wang, Taisong Jin, Miaohui Zhang, Zhengtao Yu
MQ
20 Dec 2022

PD-Quant: Post-Training Quantization based on Prediction Difference Metric
Jiawei Liu, Lin Niu, Zhihang Yuan, Dawei Yang, Xinggang Wang, Wenyu Liu
MQ
14 Dec 2022

Vertical Layering of Quantized Neural Networks for Heterogeneous Inference
Hai Wu, Ruifei He, Hao Hao Tan, Xiaojuan Qi, Kaibin Huang
MQ
10 Dec 2022

Analysis of Quantization on MLP-based Vision Models
Lingran Zhao, Zhen Dong, Kurt Keutzer
MQ
14 Sep 2022

Mixed-Precision Neural Networks: A Survey
M. Rakka, M. Fouda, Pramod P. Khargonekar, Fadi J. Kurdahi
MQ
11 Aug 2022

Symmetry Regularization and Saturating Nonlinearity for Robust Quantization
Sein Park, Yeongsang Jang, Eunhyeok Park
MQ
31 Jul 2022

Quantization of Generative Adversarial Networks for Efficient Inference: a Methodological Study
Pavel Andreev, Alexander Fritzler, Dmitry Vetrov
MQ
31 Aug 2021

Compact representations of convolutional neural networks via weight pruning and quantization
Giosuè Cataldo Marinò, A. Petrini, D. Malchiodi, Marco Frasca
MQ
28 Aug 2021

DKM: Differentiable K-Means Clustering Layer for Neural Network Compression
Minsik Cho, Keivan Alizadeh Vahid, Saurabh N. Adya, Mohammad Rastegari
28 Aug 2021

Post-Training Sparsity-Aware Quantization
Gil Shomron, F. Gabbay, Samer Kurzum, U. Weiser
MQ
23 May 2021

Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
AAML, MQ
16 Apr 2021

Diversifying Sample Generation for Accurate Data-Free Quantization
Xiangguo Zhang, Haotong Qin, Yifu Ding, Ruihao Gong, Qing Yan, Renshuai Tao, Yuhang Li, F. Yu, Xianglong Liu
MQ
01 Mar 2021

BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization
Huanrui Yang, Lin Duan, Yiran Chen, Hai Helen Li
MQ
20 Feb 2021

I-BERT: Integer-only BERT Quantization
Sehoon Kim, A. Gholami, Z. Yao, Michael W. Mahoney, Kurt Keutzer
MQ
05 Jan 2021

Term Revealing: Furthering Quantization at Run Time on Quantized DNNs
H. T. Kung, Bradley McDanel, S. Zhang
MQ
13 Jul 2020

An Overview of Neural Network Compression
James O'Neill
AI4CE
05 Jun 2020

GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference
Ali Hadi Zadeh, Isak Edo, Omar Mohamed Awad, Andreas Moshovos
MQ
08 May 2020

Least squares binary quantization of neural networks
Hadi Pouransari, Zhucheng Tu, Oncel Tuzel
MQ
09 Jan 2020

ZeroQ: A Novel Zero Shot Quantization Framework
Yaohui Cai, Z. Yao, Zhen Dong, A. Gholami, Michael W. Mahoney, Kurt Keutzer
MQ
01 Jan 2020

HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks
Zhen Dong, Z. Yao, Yaohui Cai, Daiyaan Arfeen, A. Gholami, Michael W. Mahoney, Kurt Keutzer
MQ
10 Nov 2019

Efficient 8-Bit Quantization of Transformer Neural Machine Language Translation Model
Aishwarya Bhandare, Vamsi Sripathi, Deepthi Karkada, Vivek V. Menon, Sun Choi, Kushal Datta, V. Saletore
MQ
03 Jun 2019

Improving Neural Network Quantization without Retraining using Outlier Channel Splitting
Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Christopher De Sa, Zhiru Zhang
OODD, MQ
28 Jan 2019

Post-training 4-bit quantization of convolution networks for rapid-deployment
Ron Banner, Yury Nahshan, Elad Hoffer, Daniel Soudry
MQ
02 Oct 2018

UNIQ: Uniform Noise Injection for Non-Uniform Quantization of Neural Networks
Chaim Baskin, Eli Schwartz, Evgenii Zheltonozhskii, Natan Liss, Raja Giryes, A. Bronstein, A. Mendelson
MQ
29 Apr 2018