ResearchTrend.AI
WRPN: Wide Reduced-Precision Networks

4 September 2017
Asit K. Mishra, Eriko Nurvitadhi, Jeffrey J. Cook, Debbie Marr
MQ

Papers citing "WRPN: Wide Reduced-Precision Networks"

11 / 61 papers shown

Relaxed Quantization for Discretized Neural Networks
Christos Louizos, M. Reisser, Tijmen Blankevoort, E. Gavves, Max Welling
MQ · 03 Oct 2018

A Survey on Methods and Theories of Quantized Neural Networks
Yunhui Guo
MQ · 13 Aug 2018

Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN)
Jungwook Choi, P. Chuang, Zhuo Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, K. Gopalakrishnan
MQ · 17 Jul 2018

Quantizing deep convolutional networks for efficient inference: A whitepaper
Raghuraman Krishnamoorthi
MQ · 21 Jun 2018

Scalable Methods for 8-bit Training of Neural Networks
Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry
MQ · 25 May 2018

UNIQ: Uniform Noise Injection for Non-Uniform Quantization of Neural Networks
Chaim Baskin, Eli Schwartz, Evgenii Zheltonozhskii, Natan Liss, Raja Giryes, A. Bronstein, A. Mendelson
MQ · 29 Apr 2018

Value-aware Quantization for Training and Inference of Neural Networks
Eunhyeok Park, S. Yoo, Peter Vajda
MQ · 20 Apr 2018

Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks
Hardik Sharma, Jongse Park, Naveen Suda, Liangzhen Lai, Benson Chau, Joo-Young Kim, Vikas Chandra, H. Esmaeilzadeh
MQ · 05 Dec 2017

Mixed Precision Training
Paulius Micikevicius, Sharan Narang, Jonah Alben, G. Diamos, Erich Elsen, ..., Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu
10 Oct 2017

NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps
Alessandro Aimar, Hesham Mostafa, Enrico Calabrese, A. Rios-Navarro, Ricardo Tapiador-Morales, ..., Moritz B. Milde, Federico Corradi, A. Linares-Barranco, Shih-Chii Liu, T. Delbruck
05 Jun 2017

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
MQ · 10 Feb 2017