Same, Same But Different - Recovering Neural Network Quantization Error Through Weight Factorization
Eldad Meller, Alexander Finkelstein, Uri Almog, Mark Grobman
5 February 2019 · arXiv:1902.01917 · [MQ]

Papers citing "Same, Same But Different - Recovering Neural Network Quantization Error Through Weight Factorization" (21 papers shown)
On the Impact of White-box Deployment Strategies for Edge AI on Latency and Model Performance
Jaskirat Singh, Bram Adams, Ahmed E. Hassan · 01 Nov 2024 · [VLM]

Accumulator-Aware Post-Training Quantization
Ian Colbert, Fabian Grob, Giuseppe Franco, Jinjie Zhang, Rayan Saab · 25 Sep 2024 · [MQ]

On the Impact of Black-box Deployment Strategies for Edge AI on Latency and Model Performance
Jaskirat Singh, Emad Fallahzadeh, Bram Adams, Ahmed E. Hassan · 25 Mar 2024 · [MQ]

Shedding the Bits: Pushing the Boundaries of Quantization with Minifloats on FPGAs
Shivam Aggarwal, Hans Jakob Damsgaard, Alessandro Pappalardo, Giuseppe Franco, Thomas B. Preußer, Michaela Blott, Tulika Mitra · 21 Nov 2023 · [MQ]

Quantization Aware Factorization for Deep Neural Network Compression
Daria Cherniuk, Stanislav Abukhovich, Anh-Huy Phan, Ivan Oseledets, A. Cichocki, Julia Gusak · 08 Aug 2023 · [MQ]

Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing
Yelysei Bondarenko, Markus Nagel, Tijmen Blankevoort · 22 Jun 2023 · [MQ]

QFT: Post-training quantization via fast joint finetuning of all degrees of freedom
Alexander Finkelstein, Ella Fuchs, Idan Tal, Mark Grobman, Niv Vosco, Eldad Meller · 05 Dec 2022 · [MQ]

MinUn: Accurate ML Inference on Microcontrollers
Shikhar Jaiswal, R. Goli, Aayan Kumar, Vivek Seshadri, Rahul Sharma · 29 Oct 2022

FxP-QNet: A Post-Training Quantizer for the Design of Mixed Low-Precision DNNs with Dynamic Fixed-Point Representation
Ahmad Shawahna, S. M. Sait, A. El-Maleh, Irfan Ahmad · 22 Mar 2022 · [MQ]

Quantune: Post-training Quantization of Convolutional Neural Networks using Extreme Gradient Boosting for Fast Deployment
Jemin Lee, Misun Yu, Yongin Kwon, Teaho Kim · 10 Feb 2022 · [MQ]

RED++: Data-Free Pruning of Deep Neural Networks via Input Splitting and Output Merging
Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kévin Bailly · 30 Sep 2021

An Embedding of ReLU Networks and an Analysis of their Identifiability
Pierre Stock, Rémi Gribonval · 20 Jul 2021

A White Paper on Neural Network Quantization
Markus Nagel, Marios Fournarakis, Rana Ali Amjad, Yelysei Bondarenko, M. V. Baalen, Tijmen Blankevoort · 15 Jun 2021 · [MQ]

Do All MobileNets Quantize Poorly? Gaining Insights into the Effect of Quantization on Depthwise Separable Convolutional Networks Through the Eyes of Multi-scale Distributional Dynamics
S. Yun, Alexander Wong · 24 Apr 2021 · [MQ]

Automated Backend-Aware Post-Training Quantization
Ziheng Jiang, Animesh Jain, An Liu, Josh Fromm, Chengqian Ma, Tianqi Chen, Luis Ceze · 27 Mar 2021 · [MQ]

Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors
C. Coelho, Aki Kuusela, Shane Li, Zhuang Hao, T. Aarrestad, Vladimir Loncar, J. Ngadiuba, M. Pierini, Adrian Alan Pol, S. Summers · 15 Jun 2020 · [MQ]

Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming
Itay Hubara, Yury Nahshan, Y. Hanani, Ron Banner, Daniel Soudry · 14 Jun 2020 · [MQ]

Post-training Quantization with Multiple Points: Mixed Precision without Mixed Precision
Xingchao Liu, Mao Ye, Dengyong Zhou, Qiang Liu · 20 Feb 2020 · [MQ]

ZeroQ: A Novel Zero Shot Quantization Framework
Yaohui Cai, Z. Yao, Zhen Dong, A. Gholami, Michael W. Mahoney, Kurt Keutzer · 01 Jan 2020 · [MQ]

Data-Free Quantization Through Weight Equalization and Bias Correction
Markus Nagel, M. V. Baalen, Tijmen Blankevoort, Max Welling · 11 Jun 2019 · [MQ]

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen · 10 Feb 2017 · [MQ]