ResearchTrend.AI
Convolutional Neural Networks using Logarithmic Data Representation
3 March 2016
Daisuke Miyashita, Edward H. Lee, B. Murmann

Papers citing "Convolutional Neural Networks using Logarithmic Data Representation"

50 / 65 papers shown

Fast Matrix Multiplications for Lookup Table-Quantized LLMs
    Han Guo, William Brandon, Radostin Cholakov, Jonathan Ragan-Kelley, Eric P. Xing, Yoon Kim (20 Jan 2025)

Histogram-Equalized Quantization for logic-gated Residual Neural Networks
    Van Thien Nguyen, William Guicquero, Gilles Sicard (10 Jan 2025)

Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs
    Jordan Dotzel, Yuzong Chen, Bahaa Kotb, Sushma Prasad, Gang Wu, Sheng Li, Mohamed S. Abdelfattah, Zhiru Zhang (06 May 2024)

EncodingNet: A Novel Encoding-based MAC Design for Efficient Neural Network Acceleration
    Bo Liu, Grace Li Zhang, Xunzhao Yin, Ulf Schlichtmann, Bing Li (25 Feb 2024)

NUPES: Non-Uniform Post-Training Quantization via Power Exponent Search
    Edouard Yvinec, Arnaud Dapogny, Kévin Bailly (10 Aug 2023)

AutoQNN: An End-to-End Framework for Automatically Quantizing Neural Networks
    Cheng Gong, Ye Lu, Surong Dai, Deng Qian, Chenkun Du, Tao Li (07 Apr 2023)

Blockwise Compression of Transformer-based Models without Retraining
    Gaochen Dong, W. Chen (04 Apr 2023)

Block-wise Bit-Compression of Transformer-based Models
    Gaochen Dong, W. Chen (16 Mar 2023)

MetaGrad: Adaptive Gradient Quantization with Hypernetworks
    Kaixin Xu, Alina Hui Xiu Lee, Ziyuan Zhao, Zhe Wang, Min-man Wu, Weisi Lin (04 Mar 2023)

Oscillation-free Quantization for Low-bit Vision Transformers
    Shi Liu, Zechun Liu, Kwang-Ting Cheng (04 Feb 2023)

MinUn: Accurate ML Inference on Microcontrollers
    Shikhar Jaiswal, R. Goli, Aayan Kumar, Vivek Seshadri, Rahul Sharma (29 Oct 2022)

Energy Efficient Hardware Acceleration of Neural Networks with Power-of-Two Quantisation
    Dominika Przewlocka-Rus, T. Kryjak (30 Sep 2022)

FP8 Formats for Deep Learning
    Paulius Micikevicius, Dusan Stosic, N. Burgess, Marius Cornea, Pradeep Dubey, ..., Naveen Mellempudi, S. Oberman, M. Shoeybi, Michael Siu, Hao Wu (12 Sep 2022)

ANT: Exploiting Adaptive Numerical Data Type for Low-bit Deep Neural Network Quantization
    Cong Guo, Chen Zhang, Jingwen Leng, Zihan Liu, Fan Yang, Yun-Bo Liu, Minyi Guo, Yuhao Zhu (30 Aug 2022)

SDQ: Stochastic Differentiable Quantization with Mixed Precision
    Xijie Huang, Zhiqiang Shen, Shichao Li, Zechun Liu, Xianghong Hu, Jeffry Wicaksana, Eric P. Xing, Kwang-Ting Cheng (09 Jun 2022)

8-bit Numerical Formats for Deep Neural Networks
    Badreddine Noune, Philip Jones, Daniel Justus, Dominic Masters, Carlo Luschi (06 Jun 2022)

RAPQ: Rescuing Accuracy for Power-of-Two Low-bit Post-training Quantization
    Hongyi Yao, Pu Li, Jian Cao, Xiangcheng Liu, Chenying Xie, Bin Wang (26 Apr 2022)

FxP-QNet: A Post-Training Quantizer for the Design of Mixed Low-Precision DNNs with Dynamic Fixed-Point Representation
    Ahmad Shawahna, S. M. Sait, A. El-Maleh, Irfan Ahmad (22 Mar 2022)

Power-of-Two Quantization for Low Bitwidth and Hardware Compliant Neural Networks
    Dominika Przewlocka-Rus, Syed Shakib Sarwar, H. Sumbul, Yuecheng Li, B. D. Salvo (09 Mar 2022)

Standard Deviation-Based Quantization for Deep Neural Networks
    Amir Ardakani, A. Ardakani, B. Meyer, J. Clark, W. Gross (24 Feb 2022)

Accurate Neural Training with 4-bit Matrix Multiplications at Standard Formats
    Brian Chmiel, Ron Banner, Elad Hoffer, Hilla Ben Yaacov, Daniel Soudry (19 Dec 2021)

FastSGD: A Fast Compressed SGD Framework for Distributed Machine Learning
    Keyu Yang, Lu Chen, Zhihao Zeng, Yunjun Gao (08 Dec 2021)

Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation
    Zechun Liu, Kwang-Ting Cheng, Dong Huang, Eric P. Xing, Zhiqiang Shen (29 Nov 2021)

Phantom: A High-Performance Computational Core for Sparse Convolutional Neural Networks
    Mahmood Azhar Qureshi, Arslan Munir (09 Nov 2021)

ILMPQ: An Intra-Layer Multi-Precision Deep Neural Network Quantization Framework for FPGA
    Sung-En Chang, Yanyu Li, Mengshu Sun, Yanzhi Wang, Xue Lin (30 Oct 2021)

Elastic Significant Bit Quantization and Acceleration for Deep Neural Networks
    Cheng Gong, Ye Lu, Kunpeng Xie, Zongming Jin, Tao Li, Yanzhi Wang (08 Sep 2021)

On the Accuracy of Analog Neural Network Inference Accelerators
    T. Xiao, Ben Feinberg, C. Bennett, V. Prabhakar, Prashant Saxena, V. Agrawal, S. Agarwal, M. Marinella (03 Sep 2021)

LNS-Madam: Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update
    Jiawei Zhao, Steve Dai, Rangharajan Venkatesan, Brian Zimmer, Mustafa Ali, Xuan Li, Brucek Khailany, B. Dally, Anima Anandkumar (26 Jun 2021)

Do All MobileNets Quantize Poorly? Gaining Insights into the Effect of Quantization on Depthwise Separable Convolutional Networks Through the Eyes of Multi-scale Distributional Dynamics
    S. Yun, Alexander Wong (24 Apr 2021)

Learnable Companding Quantization for Accurate Low-bit Neural Networks
    Kohei Yamamoto (12 Mar 2021)

VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference
    Steve Dai, Rangharajan Venkatesan, Haoxing Ren, B. Zimmer, W. Dally, Brucek Khailany (08 Feb 2021)

Pruning and Quantization for Deep Neural Network Acceleration: A Survey
    Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang (24 Jan 2021)

Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks
    Ziquan Liu, Wuguannan Yao, Qiao Li, Antoni B. Chan (26 Dec 2020)

Hardware and Software Optimizations for Accelerating Deep Neural Networks: Survey of Current Trends, Challenges, and the Road Ahead
    Maurizio Capra, Beatrice Bussolino, Alberto Marchisio, Guido Masera, Maurizio Martina, Muhammad Shafique (21 Dec 2020)

DAQ: Channel-Wise Distribution-Aware Quantization for Deep Image Super-Resolution Networks
    Chee Hong, Heewon Kim, Sungyong Baik, Junghun Oh, Kyoung Mu Lee (21 Dec 2020)

Mix and Match: A Novel FPGA-Centric Deep Neural Network Quantization Framework
    Sung-En Chang, Yanyu Li, Mengshu Sun, Runbin Shi, Hayden Kwok-Hay So, Xuehai Qian, Yanzhi Wang, Xue Lin (08 Dec 2020)

Where Should We Begin? A Low-Level Exploration of Weight Initialization Impact on Quantized Behaviour of Deep Neural Networks
    S. Yun, A. Wong (30 Nov 2020)

MSP: An FPGA-Specific Mixed-Scheme, Multi-Precision Deep Neural Network Quantization Framework
    Sung-En Chang, Yanyu Li, Mengshu Sun, Weiwen Jiang, Runbin Shi, Xue Lin, Yanzhi Wang (16 Sep 2020)

HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs
    H. Habi, Roy H. Jennings, Arnon Netzer (20 Jul 2020)

AQD: Towards Accurate Fully-Quantized Object Detection
    Peng Chen, Jing Liu, Bohan Zhuang, Mingkui Tan, Chunhua Shen (14 Jul 2020)

Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights
    Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li (02 Jul 2020)

Resource-Efficient Neural Networks for Embedded Systems
    Wolfgang Roth, Günther Schindler, Lukas Pfeifenberger, Robert Peharz, Sebastian Tschiatschek, Holger Fröning, Franz Pernkopf, Zoubin Ghahramani (07 Jan 2020)

Neural Network Training with Approximate Logarithmic Computations
    Arnab Sanyal, P. Beerel, K. Chugg (22 Oct 2019)

Training Deep Neural Networks Using Posit Number System
    Jinming Lu, Siyuan Lu, Zhisheng Wang, Chao Fang, Jun Lin, Zhongfeng Wang, Li Du (06 Sep 2019)

Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks
    Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tian-Hao Li, Peng Hu, Jiazhen Lin, F. Yu, Junjie Yan (14 Aug 2019)

GDRQ: Group-based Distribution Reshaping for Quantization
    Haibao Yu, Tuopu Wen, Guangliang Cheng, Jiankai Sun, Qi Han, Jianping Shi (05 Aug 2019)

QUOTIENT: Two-Party Secure Neural Network Training and Prediction
    Nitin Agrawal, Ali Shahin Shamsabadi, Matt J. Kusner, Adria Gascon (08 Jul 2019)

Data-Free Quantization Through Weight Equalization and Bias Correction
    Markus Nagel, M. V. Baalen, Tijmen Blankevoort, Max Welling (11 Jun 2019)

SWALP: Stochastic Weight Averaging in Low-Precision Training
    Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, A. Wilson, Christopher De Sa (26 Apr 2019)

Focused Quantization for Sparse CNNs
    Yiren Zhao, Xitong Gao, Daniel Bates, Robert D. Mullins, Chengzhong Xu (07 Mar 2019)