Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks
Léopold Cambier, Anahita Bhiwandiwalla, Ting Gong, M. Nekuii, Oguz H. Elibol, Hanlin Tang
arXiv:2001.05674 · 16 January 2020 · MQ
Papers citing "Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks" (17 of 17 papers shown)
Mixed Precision Training With 8-bit Floating Point
Naveen Mellempudi, Sudarshan Srinivasan, Dipankar Das, Bharat Kaul
29 May 2019 · MQ

A Study of BFLOAT16 for Deep Learning Training
Dhiraj D. Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, K. Banerjee, ..., Sudarshan Srinivasan, Abhisek Kundu, M. Smelyanskiy, Bharat Kaul, Pradeep Dubey
29 May 2019 · MQ

Training Deep Neural Networks with 8-bit Floating Point Numbers
Naigang Wang, Jungwook Choi, D. Brand, Chia-Yu Chen, K. Gopalakrishnan
19 Dec 2018 · MQ

Rethinking floating point for deep learning
Jeff Johnson
01 Nov 2018 · MQ

A Survey on Methods and Theories of Quantized Neural Networks
Yunhui Guo
13 Aug 2018 · MQ

Scalable Methods for 8-bit Training of Neural Networks
Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry
25 May 2018 · MQ

Tensor2Tensor for Neural Machine Translation
Ashish Vaswani, Samy Bengio, E. Brevdo, François Chollet, Aidan Gomez, ..., Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam M. Shazeer, Jakob Uszkoreit
16 Mar 2018

NVIDIA Tensor Core Programmability, Performance & Precision
Stefano Markidis, Steven W. D. Chien, Erwin Laure, Ivy Bo Peng, Jeffrey S. Vetter
11 Mar 2018

Training and Inference with Integers in Deep Neural Networks
Shuang Wu, Guoqi Li, F. Chen, Luping Shi
13 Feb 2018 · MQ

Mixed Precision Training of Convolutional Neural Networks using Integer Operations
Dipankar Das, Naveen Mellempudi, Dheevatsa Mudigere, Dhiraj D. Kalamkar, Sasikanth Avancha, ..., J. Corbal, N. Shustrov, R. Dubtsov, Evarist Fomenko, V. Pirogov
03 Feb 2018 · MQ

Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks
Urs Koster, T. Webb, Xin Eric Wang, Marcel Nassar, Arjun K. Bansal, ..., Luke Hornof, A. Khosrowshahi, Carey Kloss, Ruby J. Pai, N. Rao
06 Nov 2017 · MQ

Mixed Precision Training
Paulius Micikevicius, Sharan Narang, Jonah Alben, G. Diamos, Erich Elsen, ..., Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu
10 Oct 2017

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
12 Jun 2017 · 3DV

DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
20 Jun 2016 · MQ

Identity Mappings in Deep Residual Networks
Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun
16 Mar 2016

Deep Learning with Limited Numerical Precision
Suyog Gupta, A. Agrawal, K. Gopalakrishnan, P. Narayanan
09 Feb 2015 · HAI

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
22 Dec 2014 · ODL