Training and Inference with Integers in Deep Neural Networks
arXiv: 1802.04680
13 February 2018
Shuang Wu, Guoqi Li, F. Chen, Luping Shi

Papers citing "Training and Inference with Integers in Deep Neural Networks"

Showing 50 of 153 citing papers.

LiMuSE: Lightweight Multi-modal Speaker Extraction
Qinghua Liu, Yating Huang, Yunzhe Hao, Jiaming Xu, Bo Xu
07 Nov 2021

SDR: Efficient Neural Re-ranking using Succinct Document Representation
Nachshon Cohen, Amit Portnoy, B. Fetahu, A. Ingber
03 Oct 2021

Understanding and Overcoming the Challenges of Efficient Transformer Quantization
Yelysei Bondarenko, Markus Nagel, Tijmen Blankevoort
27 Sep 2021

Efficient Visual Recognition with Deep Neural Networks: A Survey on Recent Advances and New Directions
Yang Wu, Dingheng Wang, Xiaotong Lu, Fan Yang, Guoqi Li, W. Dong, Jianbo Shi
30 Aug 2021

LNS-Madam: Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update
Jiawei Zhao, Steve Dai, Rangharajan Venkatesan, Brian Zimmer, Mustafa Ali, Xuan Li, Brucek Khailany, B. Dally, Anima Anandkumar
26 Jun 2021

CD-SGD: Distributed Stochastic Gradient Descent with Compression and Delay Compensation
Enda Yu, Dezun Dong, Yemao Xu, Shuo Ouyang, Xiangke Liao
21 Jun 2021

Towards Efficient Full 8-bit Integer DNN Online Training on Resource-limited Devices without Batch Normalization
Yukuan Yang, Xiaowei Chi, Lei Deng, Tianyi Yan, Feng Gao, Guoqi Li
27 May 2021

In-Hindsight Quantization Range Estimation for Quantized Training
Marios Fournarakis, Markus Nagel
10 May 2021

Differentiable Model Compression via Pseudo Quantization Noise
Alexandre Défossez, Yossi Adi, Gabriel Synnaeve
20 Apr 2021

RCT: Resource Constrained Training for Edge AI
Tian Huang, Tao Luo, Ming Yan, Qiufeng Wang, Rick Siow Mong Goh
26 Mar 2021

NEAT: A Framework for Automated Exploration of Floating Point Approximations
Saeid Barati, Lee Ehudin, Hank Hoffmann
17 Feb 2021

FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation
Chaofan Tao, Rui Lin, Quan Chen, Zhaoyang Zhang, Ping Luo, Ngai Wong
15 Feb 2021

Distribution Adaptive INT8 Quantization for Training CNNs
Kang Zhao, Sida Huang, Pan Pan, Yinghan Li, Yingya Zhang, Zhenyu Gu, Yinghui Xu
09 Feb 2021

Enabling Binary Neural Network Training on the Edge
Erwei Wang, James J. Davis, Daniele Moro, Piotr Zielinski, Jia Jie Lim, C. Coelho, S. Chatterjee, P. Cheung, George A. Constantinides
08 Feb 2021

Fixed-point Quantization of Convolutional Neural Networks for Quantized Inference on Embedded Platforms
Rishabh Goyal, Joaquin Vanschoren, V. V. Acht, S. Nijssen
03 Feb 2021

Rethinking Floating Point Overheads for Mixed Precision DNN Accelerators
Hamzah Abdel-Aziz, Ali Shafiee, J. Shin, A. Pedram, Joseph Hassoun
27 Jan 2021

Pruning and Quantization for Deep Neural Network Acceleration: A Survey
Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang
24 Jan 2021

Adaptive Precision Training for Resource Constrained Devices
Tian Huang, Tao Luo, Qiufeng Wang
23 Dec 2020

Empirical Evaluation of Deep Learning Model Compression Techniques on the WaveNet Vocoder
Sam Davis, Giuseppe Coccia, Sam Gooch, Julian Mack
20 Nov 2020

A Statistical Framework for Low-bitwidth Training of Deep Neural Networks
Jianfei Chen, Yujie Gai, Z. Yao, Michael W. Mahoney, Joseph E. Gonzalez
27 Oct 2020

MARS: Multi-macro Architecture SRAM CIM-Based Accelerator with Co-designed Compressed Neural Networks
Syuan-Hao Sie, Jye-Luen Lee, Yi-Ren Chen, Chih-Cheng Lu, C. Hsieh, Meng-Fan Chang, K. Tang
24 Oct 2020

Mixed-Precision Embedding Using a Cache
J. Yang, Jianyu Huang, Jongsoo Park, P. T. P. Tang, Andrew Tulloch
21 Oct 2020

TaxoNN: A Light-Weight Accelerator for Deep Neural Network Training
Reza Hojabr, Kamyar Givaki, Kossar Pourahmadi, Parsa Nooralinejad, A. Khonsari, Dara Rahmati, M. Najafi
11 Oct 2020

NITI: Training Integer Neural Networks Using Integer-only Arithmetic
Maolin Wang, Seyedramin Rasoulinezhad, Philip H. W. Leong, Hayden Kwok-Hay So
28 Sep 2020

Binarized Neural Architecture Search for Efficient Object Recognition
Hanlin Chen, Lian Zhuo, Baochang Zhang, Xiawu Zheng, Jianzhuang Liu, Rongrong Ji, David Doermann, G. Guo
08 Sep 2020

An FPGA Accelerated Method for Training Feed-forward Neural Networks Using Alternating Direction Method of Multipliers and LSMR
Seyedeh Niusha Alavi Foumani, Ce Guo, Wayne Luk
06 Sep 2020

Dual Precision Deep Neural Network
J. Park, J. Choi, J. Ko
02 Sep 2020

Improved Lite Audio-Visual Speech Enhancement
Shang-Yi Chuang, Hsin-Min Wang, Yu Tsao
30 Aug 2020

Optimal Quantization for Batch Normalization in Neural Network Deployments and Beyond
Dachao Lin, Peiqin Sun, Guangzeng Xie, Shuchang Zhou, Zhihua Zhang
30 Aug 2020

Training Sparse Neural Networks using Compressed Sensing
Jonathan W. Siegel, Jianhong Chen, Pengchuan Zhang, Jinchao Xu
21 Aug 2020

Resource-Efficient Speech Mask Estimation for Multi-Channel Speech Enhancement
Lukas Pfeifenberger, Matthias Zöhrer, Günther Schindler, Wolfgang Roth, Holger Fröning, Franz Pernkopf
22 Jul 2020

Differentiable Joint Pruning and Quantization for Hardware Efficiency
Ying Wang, Yadong Lu, Tijmen Blankevoort
20 Jul 2020

Hybrid Tensor Decomposition in Neural Network Compression
Bijiao Wu, Dingheng Wang, Guangshe Zhao, Lei Deng, Guoqi Li
29 Jun 2020

Learning compositional functions via multiplicative weight updates
Jeremy Bernstein, Jiawei Zhao, M. Meister, Xuan Li, Anima Anandkumar, Yisong Yue
25 Jun 2020

Neural gradients are near-lognormal: improved quantized and sparse training
Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry
15 Jun 2020

Exploring the Potential of Low-bit Training of Convolutional Neural Networks
Kai Zhong, Xuefei Ning, Guohao Dai, Zhenhua Zhu, Tianchen Zhao, Shulin Zeng, Yu Wang, Huazhong Yang
04 Jun 2020

Lite Audio-Visual Speech Enhancement
Shang-Yi Chuang, Yu Tsao, Chen-Chou Lo, Hsin-Min Wang
24 May 2020

Bayesian Bits: Unifying Quantization and Pruning
M. V. Baalen, Christos Louizos, Markus Nagel, Rana Ali Amjad, Ying Wang, Tijmen Blankevoort, Max Welling
14 May 2020

Quantized Adam with Error Feedback
Congliang Chen, Li Shen, Haozhi Huang, Wei Liu
29 Apr 2020

Entropy-Based Modeling for Estimating Soft Errors Impact on Binarized Neural Network Inference
N. Khoshavi, S. Sargolzaei, A. Roohi, Connor Broyles, Yu Bi
10 Apr 2020

DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-chip Training
Xiaochen Peng, Shanshi Huang, Hongwu Jiang, A. Lu, Shimeng Yu
13 Mar 2020

Memory Organization for Energy-Efficient Learning and Inference in Digital Neuromorphic Accelerators
Clemens J. S. Schaefer, Patrick Faley, Emre Neftci, S. Joshi
05 Mar 2020

Federated Learning for Resource-Constrained IoT Devices: Panoramas and State-of-the-art
Ahmed Imteaj, Urmish Thakker, Shiqiang Wang, Jian Li, M. Amini
25 Feb 2020

BinaryDuo: Reducing Gradient Mismatch in Binary Activation Network by Coupling Binary Activations
Hyungjun Kim, Kyungsu Kim, Jinseok Kim, Jae-Joon Kim
16 Feb 2020

Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks
Léopold Cambier, Anahita Bhiwandiwalla, Ting Gong, M. Nekuii, Oguz H. Elibol, Hanlin Tang
16 Jan 2020

Resource-Efficient Neural Networks for Embedded Systems
Wolfgang Roth, Günther Schindler, Lukas Pfeifenberger, Robert Peharz, Sebastian Tschiatschek, Holger Fröning, Franz Pernkopf, Zoubin Ghahramani
07 Jan 2020

Sparse Weight Activation Training
Md Aamir Raihan, Tor M. Aamodt
07 Jan 2020

Towards Unified INT8 Training for Convolutional Neural Network
Feng Zhu, Ruihao Gong, F. Yu, Xianglong Liu, Yanfei Wang, Zhelong Li, Xiuqi Yang, Junjie Yan
29 Dec 2019

PANTHER: A Programmable Architecture for Neural Network Training Harnessing Energy-efficient ReRAM
Aayush Ankit, I. E. Hajj, S. R. Chalamalasetti, S. Agarwal, M. Marinella, M. Foltin, J. Strachan, D. Milojicic, Wen-mei W. Hwu, Kaushik Roy
24 Dec 2019

Trajectory growth lower bounds for random sparse deep ReLU networks
Ilan Price, Jared Tanner
25 Nov 2019