Towards Unified INT8 Training for Convolutional Neural Network (arXiv:1912.12607)

29 December 2019
Feng Zhu, Ruihao Gong, F. Yu, Xianglong Liu, Yanfei Wang, Zhelong Li, Xiuqi Yang, Junjie Yan
MQ

Papers citing "Towards Unified INT8 Training for Convolutional Neural Network"

23 / 73 papers shown
Resource-Efficient Deep Learning: A Survey on Model-, Arithmetic-, and Implementation-Level Techniques
JunKyu Lee, L. Mukhanov, A. S. Molahosseini, U. Minhas, Yang Hua, Jesus Martinez del Rincon, K. Dichev, Cheol-Ho Hong, Hans Vandierendonck
41 · 29 · 0 · 30 Dec 2021

Training Quantized Deep Neural Networks via Cooperative Coevolution
Fu Peng, Shengcai Liu, Ning Lu, Ke Tang
MQ · 26 · 1 · 0 · 23 Dec 2021

Understanding and Overcoming the Challenges of Efficient Transformer Quantization
Yelysei Bondarenko, Markus Nagel, Tijmen Blankevoort
MQ · 25 · 133 · 0 · 27 Sep 2021

Distribution-sensitive Information Retention for Accurate Binary Neural Network
Haotong Qin, Xiangguo Zhang, Ruihao Gong, Yifu Ding, Yi Xu, Xianglong Liu
MQ · 25 · 84 · 0 · 25 Sep 2021

Smoothed Differential Privacy
Ao Liu, Yu-Xiang Wang, Lirong Xia
33 · 0 · 0 · 04 Jul 2021

LNS-Madam: Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update
Jiawei Zhao, Steve Dai, Rangharajan Venkatesan, Brian Zimmer, Mustafa Ali, Xuan Li, Brucek Khailany, B. Dally, Anima Anandkumar
MQ · 39 · 13 · 0 · 26 Jun 2021

AirNet: Neural Network Transmission over the Air
Mikolaj Jankowski, Deniz Gunduz, K. Mikolajczyk
68 · 1 · 0 · 24 May 2021

In-Hindsight Quantization Range Estimation for Quantized Training
Marios Fournarakis, Markus Nagel
MQ · 14 · 10 · 0 · 10 May 2021

Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, ..., Serin Varghese, Michael Weber, Sebastian J. Wirkert, Tim Wirtz, Matthias Woehrle
AAML · 13 · 58 · 0 · 29 Apr 2021

Faster Convolution Inference Through Using Pre-Calculated Lookup Tables
Grigor Gatchev, V. Mollov
VLM · 8 · 0 · 0 · 04 Apr 2021

Zero-shot Adversarial Quantization
Yuang Liu, Wei Zhang, Jun Wang
MQ · 11 · 77 · 0 · 29 Mar 2021

Diversifying Sample Generation for Accurate Data-Free Quantization
Xiangguo Zhang, Haotong Qin, Yifu Ding, Ruihao Gong, Qing Yan, Renshuai Tao, Yuhang Li, F. Yu, Xianglong Liu
MQ · 56 · 94 · 0 · 01 Mar 2021

Distribution Adaptive INT8 Quantization for Training CNNs
Kang Zhao, Sida Huang, Pan Pan, Yinghan Li, Yingya Zhang, Zhenyu Gu, Yinghui Xu
MQ · 22 · 62 · 0 · 09 Feb 2021

Pruning and Quantization for Deep Neural Network Acceleration: A Survey
Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang
MQ · 150 · 674 · 0 · 24 Jan 2021

FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training
Y. Fu, Haoran You, Yang Katie Zhao, Yue Wang, Chaojian Li, K. Gopalakrishnan, Zhangyang Wang, Yingyan Lin
MQ · 38 · 32 · 0 · 24 Dec 2020

Mix and Match: A Novel FPGA-Centric Deep Neural Network Quantization Framework
Sung-En Chang, Yanyu Li, Mengshu Sun, Runbin Shi, Hayden Kwok-Hay So, Xuehai Qian, Yanzhi Wang, Xue Lin
MQ · 20 · 82 · 0 · 08 Dec 2020

A Statistical Framework for Low-bitwidth Training of Deep Neural Networks
Jianfei Chen, Yujie Gai, Z. Yao, Michael W. Mahoney, Joseph E. Gonzalez
MQ · 12 · 58 · 0 · 27 Oct 2020

BiPointNet: Binary Neural Network for Point Clouds
Haotong Qin, Zhongang Cai, Mingyuan Zhang, Yifu Ding, Haiyu Zhao, Shuai Yi, Xianglong Liu, Hao Su
3DPC · 30 · 50 · 0 · 12 Oct 2020

QuantNet: Learning to Quantize by Learning within Fully Differentiable Framework
Junjie Liu, Dongchao Wen, Deyu Wang, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato
MQ · 20 · 3 · 0 · 10 Sep 2020

Binary Neural Networks: A Survey
Haotong Qin, Ruihao Gong, Xianglong Liu, Xiao Bai, Jingkuan Song, N. Sebe
MQ · 50 · 457 · 0 · 31 Mar 2020

Forward and Backward Information Retention for Accurate Binary Neural Networks
Haotong Qin, Ruihao Gong, Xianglong Liu, Mingzhu Shen, Ziran Wei, F. Yu, Jingkuan Song
MQ · 131 · 324 · 0 · 24 Sep 2019

Training High-Performance and Large-Scale Deep Neural Networks with Full 8-bit Integers
Yukuan Yang, Shuang Wu, Lei Deng, Tianyi Yan, Yuan Xie, Guoqi Li
MQ · 99 · 110 · 0 · 05 Sep 2019

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
MQ · 337 · 1,049 · 0 · 10 Feb 2017