ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Quantizing Convolutional Neural Networks for Low-Power High-Throughput Inference Engines

21 May 2018
S. Settle
Manasa Bollavaram
P. D'Alberto
Elliott Delaye
Oscar Fernández
Nicholas J. Fraser
A. Ng
Ashish Sirasao
Michael Wu

Papers citing "Quantizing Convolutional Neural Networks for Low-Power High-Throughput Inference Engines"

4 / 4 papers shown
Pruning and Quantization for Deep Neural Network Acceleration: A Survey
Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang
24 Jan 2021
Fighting Quantization Bias With Bias
Alexander Finkelstein, Uri Almog, Mark Grobman
07 Jun 2019
Improving Neural Network Quantization without Retraining using Outlier Channel Splitting
Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Christopher De Sa, Zhiru Zhang
28 Jan 2019
Composite Binary Decomposition Networks
You Qiaoben, Ziyi Wang, Jianguo Li, Yinpeng Dong, Yu-Gang Jiang, Jun Zhu
16 Nov 2018