ResearchTrend.AI
Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations

10 August 2019
Bohan Zhuang, Jing Liu, Mingkui Tan, Lingqiao Liu, Ian Reid, Chunhua Shen

Papers citing "Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations"

12 / 12 papers shown
RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs
A. M. Ribeiro-dos-Santos, João Dinis Ferreira, O. Mutlu, G. Falcão
15 Jan 2023

Vertical Layering of Quantized Neural Networks for Heterogeneous Inference
Hai Wu, Ruifei He, Hao Hao Tan, Xiaojuan Qi, Kaibin Huang
10 Dec 2022

Class-based Quantization for Neural Networks
Wenhao Sun, Grace Li Zhang, Huaxi Gu, Bing Li, Ulf Schlichtmann
27 Nov 2022

ANT: Exploiting Adaptive Numerical Data Type for Low-bit Deep Neural Network Quantization
Cong Guo, Chen Zhang, Jingwen Leng, Zihan Liu, Fan Yang, Yun-Bo Liu, Minyi Guo, Yuhao Zhu
30 Aug 2022

Efficient Adaptive Activation Rounding for Post-Training Quantization
Zhengyi Li, Cong Guo, Zhanda Zhu, Yangjie Zhou, Yuxian Qiu, Xiaotian Gao, Jingwen Leng, Minyi Guo
25 Aug 2022

Compact Model Training by Low-Rank Projection with Energy Transfer
K. Guo, Zhenquan Lin, Xiaofen Xing, Fang Liu, Xiangmin Xu
12 Apr 2022

Towards Efficient Post-training Quantization of Pre-trained Language Models
Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, M. Lyu
30 Sep 2021

Review: Deep Learning in Electron Microscopy
Jeffrey M. Ede
17 Sep 2020

Joint Multi-Dimension Pruning via Numerical Gradient Update
Zechun Liu, Xinming Zhang, Zhiqiang Shen, Zhe Li, Yichen Wei, Kwang-Ting Cheng, Jian Sun
18 May 2020

Training Quantized Neural Networks with a Full-precision Auxiliary Module
Bohan Zhuang, Lingqiao Liu, Mingkui Tan, Chunhua Shen, Ian Reid
27 Mar 2019

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
10 Feb 2017

Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le
05 Nov 2016