Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks

6 November 2017
Urs Köster, Tristan J. Webb, Xin Wang, Marcel Nassar, Arjun K. Bansal, William Constable, Oguz H. Elibol, Scott Gray, Stewart Hall, Luke Hornof, Amir Khosrowshahi, Carey Kloss, Ruby J. Pai, Naveen Rao
    MQ
ArXiv · PDF · HTML
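For context on the cited paper: Flexpoint (flex16+5) stores each tensor as 16-bit integer mantissas that all share a single 5-bit exponent, and adjusts that exponent adaptively over the course of training. The snippet below is a minimal NumPy sketch of the shared-exponent idea only; the function names are ours, and the paper's actual Autoflex algorithm predicts the next exponent from running value statistics rather than rescanning the current tensor as done here.

```python
import numpy as np

def quantize_flex(tensor, mantissa_bits=16):
    """Sketch of a Flexpoint-style block format: integer mantissas
    sharing one power-of-two exponent per tensor (hypothetical helper,
    not the authors' implementation)."""
    max_mantissa = 2 ** (mantissa_bits - 1) - 1  # 32767 for 16 bits
    max_abs = np.max(np.abs(tensor))
    # Smallest shared exponent that keeps the largest magnitude
    # representable without overflow (ceil, not floor, avoids clipping).
    exponent = int(np.ceil(np.log2(max_abs / max_mantissa))) if max_abs > 0 else 0
    scale = 2.0 ** exponent
    # Round every element to an integer mantissa at the shared scale.
    mantissas = np.clip(np.round(tensor / scale),
                        -(max_mantissa + 1), max_mantissa).astype(np.int32)
    return mantissas, exponent

def dequantize_flex(mantissas, exponent):
    """Recover floating-point values from mantissas and the shared exponent."""
    return mantissas.astype(np.float64) * (2.0 ** exponent)

# Example: values with a similar dynamic range quantize with small error.
x = np.random.randn(4, 4)
m, e = quantize_flex(x)
print("max abs error:", np.max(np.abs(x - dequantize_flex(m, e))))
```

Because the exponent is shared per tensor rather than stored per element, multiplies and adds reduce to fixed-point integer arithmetic, which is what makes the format attractive for training hardware; the exponent itself lives in host memory and, in the paper's scheme, is updated predictively between iterations.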

Papers citing "Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks"

35 / 35 papers shown
Hierarchical Training of Deep Neural Networks Using Early Exiting
Yamin Sepehri, P. Pad, A. C. Yüzügüler, P. Frossard, L. A. Dunbar
36 · 9 · 0 · 04 Mar 2023

The Hidden Power of Pure 16-bit Floating-Point Neural Networks
Juyoung Yun, Byungkon Kang, Zhoulai Fu
MQ · 26 · 1 · 0 · 30 Jan 2023

MinUn: Accurate ML Inference on Microcontrollers
Shikhar Jaiswal, R. Goli, Aayan Kumar, Vivek Seshadri, Rahul Sharma
29 · 2 · 0 · 29 Oct 2022

Optimal Clipping and Magnitude-aware Differentiation for Improved Quantization-aware Training
Charbel Sakr, Steve Dai, Rangharajan Venkatesan, B. Zimmer, W. Dally, Brucek Khailany
MQ · 27 · 41 · 0 · 13 Jun 2022

One-way Explainability Isn't The Message
A. Srinivasan, Michael Bain, Enrico W. Coiera
21 · 2 · 0 · 05 May 2022

FlexBlock: A Flexible DNN Training Accelerator with Multi-Mode Block Floating Point Support
Seock-Hwan Noh, Jahyun Koo, Seunghyun Lee, Jongse Park, Jaeha Kung
AI4CE · 32 · 17 · 0 · 13 Mar 2022

Elastic Significant Bit Quantization and Acceleration for Deep Neural Networks
Cheng Gong, Ye Lu, Kunpeng Xie, Zongming Jin, Tao Li, Yanzhi Wang
MQ · 27 · 7 · 0 · 08 Sep 2021

A Survey on GAN Acceleration Using Memory Compression Technique
Dina Tantawy, Mohamed Zahran, A. Wassal
42 · 8 · 0 · 14 Aug 2021

Zero-Shot Text-to-Image Generation
Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
VLM · 257 · 4,816 · 0 · 24 Feb 2021

VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference
Steve Dai, Rangharajan Venkatesan, Haoxing Ren, B. Zimmer, W. Dally, Brucek Khailany
MQ · 33 · 68 · 0 · 08 Feb 2021

Pruning and Quantization for Deep Neural Network Acceleration: A Survey
Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang
MQ · 150 · 678 · 0 · 24 Jan 2021

Reducing Inference Latency with Concurrent Architectures for Image Recognition
Ramyad Hadidi, Jiashen Cao, Michael S. Ryoo, Hyesoon Kim
BDL · 14 · 3 · 0 · 13 Nov 2020

FPRaker: A Processing Element For Accelerating Neural Network Training
Omar Mohamed Awad, Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Ciaran Bannon, Anand Jayarajan, Gennady Pekhimenko, Andreas Moshovos
28 · 15 · 0 · 15 Oct 2020

Reducing Data Motion to Accelerate the Training of Deep Neural Networks
Sicong Zhuang, Cristiano Malossi, Marc Casas
27 · 0 · 0 · 05 Apr 2020

Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks
Léopold Cambier, Anahita Bhiwandiwalla, Ting Gong, M. Nekuii, Oguz H. Elibol, Hanlin Tang
MQ · 23 · 48 · 0 · 16 Jan 2020

Towards Unified INT8 Training for Convolutional Neural Network
Feng Zhu, Ruihao Gong, F. Yu, Xianglong Liu, Yanfei Wang, Zhelong Li, Xiuqi Yang, Junjie Yan
MQ · 40 · 151 · 0 · 29 Dec 2019

On-Device Machine Learning: An Algorithms and Learning Theory Perspective
Sauptik Dhar, Junyao Guo, Jiayi Liu, S. Tripathi, Unmesh Kurup, Mohak Shah
28 · 141 · 0 · 02 Nov 2019

MLPerf Training Benchmark
Arya D. McCarthy, Christine Cheng, Cody Coleman, Greg Diamos, Paulius Micikevicius, ..., Carole-Jean Wu, Lingjie Xu, Masafumi Yamazaki, C. Young, Matei A. Zaharia
47 · 307 · 0 · 02 Oct 2019

Automatic Compiler Based FPGA Accelerator for CNN Training
S. Venkataramanaiah, Yufei Ma, Shihui Yin, Eriko Nurvitadhi, A. Dasu, Yu Cao, Jae-sun Seo
32 · 38 · 0 · 15 Aug 2019

Deep Learning Training on the Edge with Low-Precision Posits
H. F. Langroudi, Zachariah Carmichael, Dhireesha Kudithipudi
MQ · 21 · 14 · 0 · 30 Jul 2019

QUOTIENT: Two-Party Secure Neural Network Training and Prediction
Nitin Agrawal, Ali Shahin Shamsabadi, Matt J. Kusner, Adria Gascon
30 · 212 · 0 · 08 Jul 2019

Data-Free Quantization Through Weight Equalization and Bias Correction
Markus Nagel, M. V. Baalen, Tijmen Blankevoort, Max Welling
MQ · 19 · 502 · 0 · 11 Jun 2019

Mixed Precision Training With 8-bit Floating Point
Naveen Mellempudi, Sudarshan Srinivasan, Dipankar Das, Bharat Kaul
MQ · 18 · 69 · 0 · 29 May 2019

Accelerating Generalized Linear Models with MLWeaving: A One-Size-Fits-All System for Any-precision Learning (Technical Report)
Zeke Wang, Kaan Kara, Hantian Zhang, Gustavo Alonso, O. Mutlu, Ce Zhang
31 · 34 · 0 · 08 Mar 2019

Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks
Charbel Sakr, Naigang Wang, Chia-Yu Chen, Jungwook Choi, A. Agrawal, Naresh R Shanbhag, K. Gopalakrishnan
MQ · 30 · 34 · 0 · 19 Jan 2019

Collaborative Execution of Deep Neural Networks on Internet of Things Devices
Ramyad Hadidi, Jiashen Cao, Michael S. Ryoo, Hyesoon Kim
23 · 19 · 0 · 08 Jan 2019

DSConv: Efficient Convolution Operator
Marcelo Gennari, Roger Fawcett, V. Prisacariu
MQ · 32 · 62 · 0 · 07 Jan 2019

Rethinking floating point for deep learning
Jeff Johnson
MQ · 19 · 138 · 0 · 01 Nov 2018

Training Deep Neural Network in Limited Precision
Hyunsun Park, J. Lee, Youngmin Oh, Sangwon Ha, Seungwon Lee
19 · 9 · 0 · 12 Oct 2018

Exploring the Vision Processing Unit as Co-processor for Inference
Sergio Rivas-Gomez, Antonio J. Peña, D. Moloney, Erwin Laure, Stefano Markidis
BDL · 19 · 22 · 0 · 09 Oct 2018

Low-Precision Floating-Point Schemes for Neural Network Training
Marc Ortiz, A. Cristal, Eduard Ayguadé, Marc Casas
MQ · 30 · 22 · 0 · 14 Apr 2018

Training DNNs with Hybrid Block Floating Point
M. Drumond, Tao R. Lin, Martin Jaggi, Babak Falsafi
25 · 95 · 0 · 04 Apr 2018

Toolflows for Mapping Convolutional Neural Networks on FPGAs: A Survey and Future Directions
Stylianos I. Venieris, Alexandros Kouris, C. Bouganis
19 · 184 · 0 · 15 Mar 2018

Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Tal Ben-Nun, Torsten Hoefler
GNN · 33 · 704 · 0 · 26 Feb 2018

A Scalable Near-Memory Architecture for Training Deep Neural Networks on Large In-Memory Datasets
Fabian Schuiki, Michael Schaffner, Frank K. Gürkaynak, Luca Benini
31 · 70 · 0 · 19 Feb 2018