Training DNNs with Hybrid Block Floating Point

4 April 2018 · arXiv:1804.01526
M. Drumond, Tao R. Lin, Martin Jaggi, Babak Falsafi
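
For orientation, the sketch below illustrates the basic block floating point (BFP) idea named in the title: the values in a block share a single power-of-two exponent while each value keeps its own integer mantissa. This is a minimal NumPy illustration only, not the paper's hybrid BFP training scheme; the function name bfp_quantize and the block_size and mantissa_bits defaults are illustrative choices, not values taken from the paper.

import numpy as np

def bfp_quantize(x, mantissa_bits=8, block_size=16):
    # Pad to a multiple of the block size and split the tensor into blocks.
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)

    # One shared power-of-two exponent per block, taken from the largest magnitude.
    max_abs = np.max(np.abs(blocks), axis=1, keepdims=True)
    exp = np.floor(np.log2(np.maximum(max_abs, 1e-38)))

    # Express every value as a signed fixed-point mantissa relative to the
    # shared exponent, rounding and saturating to the mantissa range.
    scale = 2.0 ** (exp - (mantissa_bits - 1))
    lim = 2 ** (mantissa_bits - 1)
    mant = np.clip(np.round(blocks / scale), -lim, lim - 1)

    # Dequantize back to floats so the rounding error can be inspected.
    return (mant * scale).reshape(-1)[:len(x)]

x = np.random.randn(1024).astype(np.float32)
xq = bfp_quantize(x)
print("max abs error:", float(np.max(np.abs(x - xq))))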

Papers citing "Training DNNs with Hybrid Block Floating Point"

12 / 12 papers shown
GSQ-Tuning: Group-Shared Exponents Integer in Fully Quantized Training for LLMs On-Device Fine-tuning
Sifan Zhou, Shuo Wang, Zhihang Yuan, Mingjia Shi, Yuzhang Shang, Dawei Yang
ALM, MQ · 90 · 0 · 0 · 18 Feb 2025

Effective Interplay between Sparsity and Quantization: From Theory to Practice
Simla Burcu Harma, Ayan Chakraborty, Elizaveta Kostenok, Danila Mishin, Dongho Ha, ..., Martin Jaggi, Ming Liu, Yunho Oh, Suvinay Subramanian, Amir Yazdanbakhsh
MQ · 44 · 5 · 0 · 31 May 2024

With Shared Microexponents, A Little Shifting Goes a Long Way
Bita Darvish Rouhani, Ritchie Zhao, V. Elango, Rasoul Shafipour, Mathew Hall, ..., Eric S. Chung, Zhaoxia Deng, S. Naghshineh, Jongsoo Park, Maxim Naumov
MQ · 43 · 36 · 0 · 16 Feb 2023

Approximating Continuous Convolutions for Deep Network Compression
Theo W. Costain, V. Prisacariu
36 · 0 · 0 · 17 Oct 2022

Is Integer Arithmetic Enough for Deep Learning Training?
Alireza Ghaffari, Marzieh S. Tahaei, Mohammadreza Tayaranian, M. Asgharian, V. Nia
MQ · 11 · 16 · 0 · 18 Jul 2022

Adaptive Block Floating-Point for Analog Deep Learning Hardware
Ayon Basumallik, D. Bunandar, Nicholas Dronen, Nicholas Harris, Ludmila Levkova, Calvin McCarter, Lakshmi Nair, David Walter, David Widemann
14 · 6 · 0 · 12 May 2022

FPGA-based AI Smart NICs for Scalable Distributed AI Training Systems
Rui Ma, E. Georganas, A. Heinecke, Andrew Boutros, Eriko Nurvitadhi
GNN · 24 · 12 · 0 · 22 Apr 2022

FlexBlock: A Flexible DNN Training Accelerator with Multi-Mode Block Floating Point Support
Seock-Hwan Noh, Jahyun Koo, Seunghyun Lee, Jongse Park, Jaeha Kung
AI4CE · 32 · 17 · 0 · 13 Mar 2022

8-bit Optimizers via Block-wise Quantization
Tim Dettmers, M. Lewis, Sam Shleifer, Luke Zettlemoyer
MQ · 34 · 270 · 0 · 06 Oct 2021

A Statistical Framework for Low-bitwidth Training of Deep Neural Networks
Jianfei Chen, Yujie Gai, Z. Yao, Michael W. Mahoney, Joseph E. Gonzalez
MQ · 12 · 58 · 0 · 27 Oct 2020

FPRaker: A Processing Element For Accelerating Neural Network Training
Omar Mohamed Awad, Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Ciaran Bannon, Anand Jayarajan, Gennady Pekhimenko, Andreas Moshovos
20 · 15 · 0 · 15 Oct 2020

DSConv: Efficient Convolution Operator
Marcelo Gennari, Roger Fawcett, V. Prisacariu
MQ · 26 · 62 · 0 · 07 Jan 2019