Low-Precision Batch-Normalized Activations
Benjamin Graham
27 February 2017
arXiv: 1702.08231
Community: MQ
Papers citing "Low-Precision Batch-Normalized Activations" (7 of 7 papers shown):
1. FxP-QNet: A Post-Training Quantizer for the Design of Mixed Low-Precision DNNs with Dynamic Fixed-Point Representation [MQ]
   Ahmad Shawahna, S. M. Sait, A. El-Maleh, Irfan Ahmad
   22 Mar 2022

2. Enabling Binary Neural Network Training on the Edge [MQ]
   Erwei Wang, James J. Davis, Daniele Moro, Piotr Zielinski, Jia Jie Lim, C. Coelho, S. Chatterjee, P. Cheung, George A. Constantinides
   08 Feb 2021

3. Pruning and Quantization for Deep Neural Network Acceleration: A Survey [MQ]
   Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang
   24 Jan 2021

4. Constructing Energy-efficient Mixed-precision Neural Networks through Principal Component Analysis for Edge Intelligence
   I. Chakraborty, Deboleena Roy, Isha Garg, Aayush Ankit, Kaushik Roy
   04 Jun 2019

5. PACT: Parameterized Clipping Activation for Quantized Neural Networks [MQ]
   Jungwook Choi, Zhuo Wang, Swagath Venkataramani, P. Chuang, Vijayalakshmi Srinivasan, K. Gopalakrishnan
   16 May 2018

6. WRPN: Wide Reduced-Precision Networks [MQ]
   Asit K. Mishra, Eriko Nurvitadhi, Jeffrey J. Cook, Debbie Marr
   04 Sep 2017
7. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation [AIMat]
   Yonghui Wu, M. Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
   26 Sep 2016