Post-Training Piecewise Linear Quantization for Deep Neural Networks
Jun Fang, Ali Shafiee, Hamzah Abdel-Aziz, D. Thorsley, Georgios Georgiadis, Joseph Hassoun
arXiv:2002.00104 · MQ · 31 January 2020
Papers citing "Post-Training Piecewise Linear Quantization for Deep Neural Networks"
27 / 27 papers shown
On the Impact of White-box Deployment Strategies for Edge AI on Latency and Model Performance
Jaskirat Singh, Bram Adams, Ahmed E. Hassan
VLM · 43 · 0 · 0 · 01 Nov 2024
Quantizing YOLOv7: A Comprehensive Study
Mohammadamin Baghbanbashi, Mohsen Raji, B. Ghavami
MQ · 32 · 8 · 0 · 06 Jul 2024
Instance-Aware Group Quantization for Vision Transformers
Jaehyeon Moon, Dohyung Kim, Junyong Cheon, Bumsub Ham
MQ · ViT · 29 · 7 · 0 · 01 Apr 2024
On the Impact of Black-box Deployment Strategies for Edge AI on Latency and Model Performance
Jaskirat Singh, Emad Fallahzadeh, Bram Adams, Ahmed E. Hassan
MQ · 40 · 3 · 0 · 25 Mar 2024
Achieving Pareto Optimality using Efficient Parameter Reduction for DNNs in Resource-Constrained Edge Environment
Atah Nuh Mih, Alireza Rahimi, Asfia Kawnine, Francis Palma, Monica Wachowicz, R. Dubay, Hung Cao
23 · 0 · 0 · 14 Mar 2024
Exploring Post-Training Quantization of Protein Language Models
Shuang Peng, Fei Yang, Ning Sun, Sheng Chen, Yanfeng Jiang, Aimin Pan
MQ · 27 · 0 · 0 · 30 Oct 2023
Digital Modeling on Large Kernel Metamaterial Neural Network
Quan Liu, Hanyu Zheng, Brandon T. Swartz, Ho Hin Lee, Zuhayr Asad, I. Kravchenko, Jason G Valentine, Yuankai Huo
20 · 4 · 0 · 21 Jul 2023
Q-YOLO: Efficient Inference for Real-time Object Detection
Mingze Wang, H. Sun, Jun Shi, Xuhui Liu, Baochang Zhang, Xianbin Cao
ObjD · 42 · 8 · 0 · 01 Jul 2023
MBQuant: A Novel Multi-Branch Topology Method for Arbitrary Bit-width Network Quantization
Mingliang Xu, Yuyao Zhou, Rongrong Ji, Rongrong Ji
MQ · 31 · 1 · 0 · 14 May 2023
Q-DETR: An Efficient Low-Bit Quantized Detection Transformer
Sheng Xu, Yanjing Li, Mingbao Lin, Penglei Gao, Guodong Guo, Jinhu Lu, Baochang Zhang
MQ · 29 · 23 · 0 · 01 Apr 2023
Towards Accurate Post-Training Quantization for Vision Transformer
Yifu Ding, Haotong Qin, Qing-Yu Yan, Z. Chai, Junjie Liu, Xiaolin K. Wei, Xianglong Liu
MQ · 54 · 68 · 0 · 25 Mar 2023
PD-Quant: Post-Training Quantization based on Prediction Difference Metric
Jiawei Liu, Lin Niu, Zhihang Yuan, Dawei Yang, Xinggang Wang, Wenyu Liu
MQ · 96 · 68 · 0 · 14 Dec 2022
Vertical Layering of Quantized Neural Networks for Heterogeneous Inference
Hai Wu, Ruifei He, Hao Hao Tan, Xiaojuan Qi, Kaibin Huang
MQ · 24 · 2 · 0 · 10 Dec 2022
Energy awareness in low precision neural networks
Nurit Spingarn-Eliezer, Ron Banner, Elad Hoffer, Hilla Ben-Yaacov, T. Michaeli
38 · 0 · 0 · 06 Feb 2022
Post-training Quantization for Neural Networks with Provable Guarantees
Jinjie Zhang, Yixuan Zhou, Rayan Saab
MQ · 23 · 32 · 0 · 26 Jan 2022
IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Shot Network Quantization
Mingliang Xu, Mingbao Lin, Gongrui Nan, Jianzhuang Liu, Baochang Zhang, Yonghong Tian, Rongrong Ji
MQ · 46 · 71 · 0 · 17 Nov 2021
Arch-Net: Model Distillation for Architecture Agnostic Model Deployment
Weixin Xu, Zipeng Feng, Shuangkang Fang, Song Yuan, Yi Yang, Shuchang Zhou
MQ · 27 · 1 · 0 · 01 Nov 2021
Towards Efficient Post-training Quantization of Pre-trained Language Models
Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, M. Lyu
MQ · 79 · 47 · 0 · 30 Sep 2021
Fine-grained Data Distribution Alignment for Post-Training Quantization
Mingliang Xu, Mingbao Lin, Yonghong Tian, Ke Li, Yunhang Shen, Rongrong Ji, Yongjian Wu, Rongrong Ji
MQ · 84 · 19 · 0 · 09 Sep 2021
Full-Cycle Energy Consumption Benchmark for Low-Carbon Computer Vision
Bo-wen Li, Xinyang Jiang, Donglin Bai, Yuge Zhang, Ningxin Zheng, Xuanyi Dong, Lu Liu, Yuqing Yang, Dongsheng Li
14 · 10 · 0 · 30 Aug 2021
MOHAQ: Multi-Objective Hardware-Aware Quantization of Recurrent Neural Networks
Nesma M. Rezk, Tomas Nordstrom, D. Stathis, Z. Ul-Abdin, E. Aksoy, A. Hemani
MQ · 20 · 1 · 0 · 02 Aug 2021
Post-Training Sparsity-Aware Quantization
Gil Shomron, F. Gabbay, Samer Kurzum, U. Weiser
MQ · 39 · 33 · 0 · 23 May 2021
VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference
Steve Dai, Rangharajan Venkatesan, Haoxing Ren, B. Zimmer, W. Dally, Brucek Khailany
MQ · 27 · 67 · 0 · 08 Feb 2021
FantastIC4: A Hardware-Software Co-Design Approach for Efficiently Running 4bit-Compact Multilayer Perceptrons
Simon Wiedemann, Suhas Shivapakash, P. Wiedemann, Daniel Becking, Wojciech Samek, F. Gerfers, Thomas Wiegand
MQ · 23 · 7 · 0 · 17 Dec 2020
GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference
Ali Hadi Zadeh, Isak Edo, Omar Mohamed Awad, Andreas Moshovos
MQ · 30 · 183 · 0 · 08 May 2020
Loss Aware Post-training Quantization
Yury Nahshan, Brian Chmiel, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, A. Bronstein, A. Mendelson
MQ · 28 · 163 · 0 · 17 Nov 2019
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
MQ · 337 · 1,049 · 0 · 10 Feb 2017