Accurate Neural Training with 4-bit Matrix Multiplications at Standard Formats
19 December 2021
Brian Chmiel, Ron Banner, Elad Hoffer, Hilla Ben Yaacov, Daniel Soudry

Papers citing "Accurate Neural Training with 4-bit Matrix Multiplications at Standard Formats" (14 papers shown)

HOT: Hadamard-based Optimized Training
Seonggon Kim, Juncheol Shin, Seung-taek Woo, Eunhyeok Park
27 Mar 2025

Oaken: Fast and Efficient LLM Serving with Online-Offline Hybrid KV Cache Quantization
Minsu Kim, Seongmin Hong, RyeoWook Ko, S. Choi, Hunjong Lee, Junsoo Kim, Joo-Young Kim, Jongse Park
24 Mar 2025

EXAQ: Exponent Aware Quantization For LLMs Acceleration
Moran Shkolnik, Maxim Fishman, Brian Chmiel, Hilla Ben-Yaacov, Ron Banner, Kfir Y. Levy
04 Oct 2024

HLQ: Fast and Efficient Backpropagation via Hadamard Low-rank Quantization
Seonggon Kim, Eunhyeok Park
21 Jun 2024

LoQT: Low Rank Adapters for Quantized Training
Sebastian Loeschcke, M. Toftrup, M. Kastoryano, Serge Belongie, Vésteinn Snæbjarnarson
26 May 2024

BOLD: Boolean Logic Deep Learning
Van Minh Nguyen, Cristian Ocampo, Aymen Askri, Louis Leconte, Ba-Hien Tran
25 May 2024

Boolean Logic as an Error feedback mechanism
Louis Leconte
29 Jan 2024

Boolean Variation and Boolean Logic BackPropagation
Van Minh Nguyen
13 Nov 2023

Enhancing Computation Efficiency in Large Language Models through Weight and Activation Quantization
Jangwhan Lee, Minsoo Kim, Seungcheol Baek, Seok Joong Hwang, Wonyong Sung, Jungwook Choi
09 Nov 2023

Hadamard Domain Training with Integers for Class Incremental Quantized Learning
Martin Schiemer, Clemens J. S. Schaefer, Jayden Parker Vap, Mark Horeni, Yu Emma Wang, Juan Ye, Siddharth Joshi
05 Oct 2023

Accuracy Booster: Enabling 4-bit Fixed-point Arithmetic for DNN Training
Simla Burcu Harma, Canberk Sonmez, Nicholas Sperry, Babak Falsafi, Martin Jaggi, Yunho Oh
19 Nov 2022

AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks
Louis Leconte, S. Schechtman, Eric Moulines
07 Nov 2022

Energy Efficient Hardware Acceleration of Neural Networks with Power-of-Two Quantisation
Dominika Przewlocka-Rus, T. Kryjak
30 Sep 2022

Pruning and Quantization for Deep Neural Network Acceleration: A Survey
Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang
24 Jan 2021