ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Training Deep Neural Networks with 8-bit Floating Point Numbers
arXiv:1812.08011, 19 December 2018
Naigang Wang, Jungwook Choi, D. Brand, Chia-Yu Chen, K. Gopalakrishnan
MQ

Papers citing "Training Deep Neural Networks with 8-bit Floating Point Numbers"

22 of 72 citing papers shown
Improved stochastic rounding
Lu Xia, M. Anthonissen, M. Hochstenbach, B. Koren
31 May 2020
SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation
Yang Katie Zhao, Xiaohan Chen, Yue Wang, Chaojian Li, Haoran You, Y. Fu, Yuan Xie, Zhangyang Wang, Yingyan Lin
MQ
07 May 2020
COVID-MobileXpert: On-Device COVID-19 Patient Triage and Follow-up using Chest X-rays
Xin Li, Chengyin Li, D. Zhu
06 Apr 2020
A Survey of Methods for Low-Power Deep Learning and Computer Vision
Abhinav Goel, Caleb Tung, Yung-Hsiang Lu, George K. Thiruvathukal
VLM
24 Mar 2020
Taurus: A Data Plane Architecture for Per-Packet ML
Tushar Swamy, Alexander Rucker, M. Shahbaz, Ishan Gaur, K. Olukotun
12 Feb 2020
Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks
Léopold Cambier, Anahita Bhiwandiwalla, Ting Gong, M. Nekuii, Oguz H. Elibol, Hanlin Tang
MQ
16 Jan 2020
Sparse Weight Activation Training
Md Aamir Raihan, Tor M. Aamodt
07 Jan 2020
Towards Unified INT8 Training for Convolutional Neural Network
Feng Zhu, Ruihao Gong, F. Yu, Xianglong Liu, Yanfei Wang, Zhelong Li, Xiuqi Yang, Junjie Yan
MQ
29 Dec 2019
PANTHER: A Programmable Architecture for Neural Network Training Harnessing Energy-efficient ReRAM
Aayush Ankit, I. E. Hajj, S. R. Chalamalasetti, S. Agarwal, M. Marinella, M. Foltin, J. Strachan, D. Milojicic, Wen-mei W. Hwu, Kaushik Roy
24 Dec 2019
On-Device Machine Learning: An Algorithms and Learning Theory Perspective
Sauptik Dhar, Junyao Guo, Jiayi Liu, S. Tripathi, Unmesh Kurup, Mohak Shah
02 Nov 2019
Secure Evaluation of Quantized Neural Networks
Anders Dalskov, Daniel E. Escudero, Marcel Keller
28 Oct 2019
QPyTorch: A Low-Precision Arithmetic Simulation Framework
Tianyi Zhang, Zhiqiu Lin, Guandao Yang, Christopher De Sa
MQ
09 Oct 2019
Training Deep Neural Networks Using Posit Number System
Jinming Lu, Siyuan Lu, Zhisheng Wang, Chao Fang, Jun Lin, Zhongfeng Wang, Li Du
MQ
06 Sep 2019
Deep Learning Training on the Edge with Low-Precision Posits
H. F. Langroudi, Zachariah Carmichael, Dhireesha Kudithipudi
MQ
30 Jul 2019
DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks
Simon Wiedemann, H. Kirchhoffer, Stefan Matlage, Paul Haase, Arturo Marbán, ..., Ahmed Osman, D. Marpe, H. Schwarz, Thomas Wiegand, Wojciech Samek
27 Jul 2019
5 Parallel Prism: A topology for pipelined implementations of convolutional neural networks using computational memory
M. Dazzi, Abu Sebastian, P. Francese, Thomas Parnell, Luca Benini, E. Eleftheriou
GNN
08 Jun 2019
Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, Tudor Dumitras
AAML
03 Jun 2019
Mixed Precision Training With 8-bit Floating Point
Naveen Mellempudi, Sudarshan Srinivasan, Dipankar Das, Bharat Kaul
MQ
29 May 2019
SWALP: Stochastic Weight Averaging in Low-Precision Training
Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, A. Wilson, Christopher De Sa
26 Apr 2019
Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks
Charbel Sakr, Naigang Wang, Chia-Yu Chen, Jungwook Choi, A. Agrawal, Naresh R Shanbhag, K. Gopalakrishnan
MQ
19 Jan 2019
A note on solving nonlinear optimization problems in variable precision
Serge Gratton, P. Toint
09 Dec 2018
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
AIMat
26 Sep 2016