Towards Effective Low-bitwidth Convolutional Neural Networks
Versions: v1, v2 (latest)

1 November 2017
Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, Ian Reid
MQ
ArXiv (abs) · PDF · HTML
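For readers new to the MQ topic tag (which on this site appears to denote model quantization): low-bitwidth networks such as those in this paper are typically trained with a quantizer in the forward pass and a straight-through estimator (STE) in the backward pass. The snippet below is a minimal PyTorch sketch of generic uniform quantization with an STE; it is a common building block across the papers listed here, not necessarily the exact scheme of this paper, and the 2-bit setting and [0, 1] clipping range are arbitrary choices made only for illustration.

    import torch

    def quantize_ste(x: torch.Tensor, bits: int = 2) -> torch.Tensor:
        """Uniformly quantize values in [0, 1] to 2**bits levels.

        Forward pass returns the rounded values; the backward pass lets the
        gradient flow straight through the clamp (straight-through estimator).
        """
        levels = 2 ** bits - 1
        x_c = x.clamp(0.0, 1.0)                   # restrict to the quantization range
        x_q = torch.round(x_c * levels) / levels  # snap to the nearest grid point
        return x_c + (x_q - x_c).detach()         # STE: forward = x_q, gradient via x_c

    # Example: 2-bit activations keep only the values {0, 1/3, 2/3, 1}.
    a = torch.rand(4, requires_grad=True)
    print(quantize_ste(a, bits=2))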

Papers citing "Towards Effective Low-bitwidth Convolutional Neural Networks"

Showing 50 of 89 citing papers. Each entry lists the title, then authors · topic tags (where shown) · the three per-paper counters from the table · publication date.
PQCAD-DM: Progressive Quantization and Calibration-Assisted Distillation for Extremely Efficient Diffusion Model
Beomseok Ko, Hyeryung Jang · MQ · 22 / 0 / 0 · 20 Jun 2025
Saliency-Aware Quantized Imitation Learning for Efficient Robotic Control
Seongmin Park, Hyungmin Kim, Sangwoo Kim, Wonseok Jeon, Juyoung Yang, Byeongwook Jeon, Yoonseon Oh, Jungwook Choi · 197 / 0 / 0 · 21 May 2025
PQD: Post-training Quantization for Efficient Diffusion Models
Jiaojiao Ye, Zhen Wang, Linnan Jiang · MQ · 81 / 0 / 0 · 03 Jan 2025
Temporal Feature Matters: A Framework for Diffusion Model Quantization
Yushi Huang, Ruihao Gong, Xianglong Liu, Jing Liu, Yuhang Li, Jiwen Lu, Dacheng Tao · MQ, DiffM · 124 / 0 / 0 · 28 Jul 2024
EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models
Yefei He, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang · DiffM, MQ · 162 / 51 / 0 · 05 Oct 2023
Q-DETR: An Efficient Low-Bit Quantized Detection Transformer
Sheng Xu, Yanjing Li, Mingbao Lin, Penglei Gao, Guodong Guo, Jinhu Lu, Baochang Zhang · MQ · 98 / 24 / 0 · 01 Apr 2023
RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs
A. M. Ribeiro-dos-Santos, João Dinis Ferreira, O. Mutlu, G. Falcão · MQ · 97 / 2 / 0 · 15 Jan 2023
QFT: Post-training quantization via fast joint finetuning of all degrees of freedom
Alexander Finkelstein, Ella Fuchs, Idan Tal, Mark Grobman, Niv Vosco, Eldad Meller · MQ · 79 / 7 / 0 · 05 Dec 2022
BiViT: Extremely Compressed Binary Vision Transformer
Yefei He, Zhenyu Lou, Luoming Zhang, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang · ViT, MQ · 73 / 28 / 0 · 14 Nov 2022
Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks
Cuong Pham, Tuan Hoang, Thanh-Toan Do · FedML, MQ · 93 / 15 / 0 · 27 Oct 2022
CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution
Chee Hong, Sungyong Baik, Heewon Kim, Seungjun Nah, Kyoung Mu Lee · SupR, MQ · 112 / 33 / 0 · 21 Jul 2022
Learnable Mixed-precision and Dimension Reduction Co-design for Low-storage Activation
Yu-Shan Tai, Cheng-Yang Chang, Chieh-Fang Teng, An-Yeu Wu · 80 / 5 / 0 · 16 Jul 2022
BiT: Robustly Binarized Multi-distilled Transformer
Zechun Liu, Barlas Oğuz, Aasish Pappu, Lin Xiao, Scott Yih, Meng Li, Raghuraman Krishnamoorthi, Yashar Mehdad · MQ · 128 / 55 / 0 · 25 May 2022
Binarizing by Classification: Is soft function really necessary?
Yefei He, Luoming Zhang, Weijia Wu, Hong Zhou · MQ · 118 / 3 / 0 · 16 May 2022
FxP-QNet: A Post-Training Quantizer for the Design of Mixed Low-Precision DNNs with Dynamic Fixed-Point Representation
Ahmad Shawahna, S. M. Sait, A. El-Maleh, Irfan Ahmad · MQ · 65 / 7 / 0 · 22 Mar 2022
Standard Deviation-Based Quantization for Deep Neural Networks
Amir Ardakani, A. Ardakani, B. Meyer, J. Clark, W. Gross · MQ · 93 / 1 / 0 · 24 Feb 2022
Neural Network Quantization for Efficient Inference: A Survey
Olivia Weng · MQ · 75 / 26 / 0 · 08 Dec 2021
Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation
Zechun Liu, Kwang-Ting Cheng, Dong Huang, Eric P. Xing, Zhiqiang Shen · MQ · 91 / 111 / 0 · 29 Nov 2021
Sharpness-aware Quantization for Deep Neural Networks
Jing Liu, Jianfei Cai, Bohan Zhuang · MQ · 157 / 25 / 0 · 24 Nov 2021
Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples
Kanghyun Choi, Deokki Hong, Noseong Park, Youngsok Kim, Jinho Lee · MQ · 71 / 67 / 0 · 04 Nov 2021
BNAS v2: Learning Architectures for Binary Networks with Empirical Improvements
Dahyun Kim, Kunal Pratap Singh, Jonghyun Choi · MQ · 117 / 7 / 0 · 16 Oct 2021
Full-Cycle Energy Consumption Benchmark for Low-Carbon Computer Vision
Yue Liu, Xinyang Jiang, Donglin Bai, Yuge Zhang, Ningxin Zheng, Xuanyi Dong, Lu Liu, Yuqing Yang, Dongsheng Li · 73 / 10 / 0 · 30 Aug 2021
Dynamic Network Quantization for Efficient Video Inference
Ximeng Sun, Yikang Shen, Chun-Fu Chen, A. Oliva, Rogerio Feris, Kate Saenko · 91 / 46 / 0 · 23 Aug 2021
How Do Adam and Training Strategies Help BNNs Optimization?
Zechun Liu, Zhiqiang Shen, Shichao Li, K. Helwegen, Dong Huang, Kwang-Ting Cheng · ODL, MQ · 83 / 86 / 0 · 21 Jun 2021
Pareto-Optimal Quantized ResNet Is Mostly 4-bit
AmirAli Abdolrashidi, Lisa Wang, Shivani Agrawal, J. Malmaud, Oleg Rybakov, Chas Leichner, Lukasz Lew · MQ · 71 / 36 / 0 · 07 May 2021
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele · AAML, MQ · 72 / 18 / 0 · 16 Apr 2021
Network Quantization with Element-wise Gradient Scaling
Junghyup Lee, Dohyung Kim, Bumsub Ham · MQ · 91 / 120 / 0 · 02 Apr 2021
Charged particle tracking via edge-classifying interaction networks
G. Dezoort, S. Thais, Javier Mauricio Duarte, Vesal Razavimaleki, M. Atkinson, I. Ojalvo, Mark S. Neubauer, P. Elmer · 88 / 49 / 0 · 30 Mar 2021
ReCU: Reviving the Dead Weights in Binary Neural Networks
Zihan Xu, Mingbao Lin, Jianzhuang Liu, Jie Chen, Ling Shao, Yue Gao, Yonghong Tian, Rongrong Ji · MQ · 84 / 84 / 0 · 23 Mar 2021
Learnable Companding Quantization for Accurate Low-bit Neural Networks
Kohei Yamamoto · MQ · 95 / 68 / 0 · 12 Mar 2021
Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference
B. Hawks, Javier Mauricio Duarte, Nicholas J. Fraser, Alessandro Pappalardo, N. Tran, Yaman Umuroglu · MQ · 84 / 51 / 0 · 22 Feb 2021
Collaborative Intelligence: Challenges and Opportunities
Ivan V. Bajić, Weisi Lin, Yonghong Tian · 53 / 53 / 0 · 13 Feb 2021
Distribution Adaptive INT8 Quantization for Training CNNs
Kang Zhao, Sida Huang, Pan Pan, Yinghan Li, Yingya Zhang, Zhenyu Gu, Yinghui Xu · MQ · 114 / 68 / 0 · 09 Feb 2021
Rethinking Floating Point Overheads for Mixed Precision DNN Accelerators
Hamzah Abdel-Aziz, Ali Shafiee, J. Shin, A. Pedram, Joseph Hassoun · MQ · 74 / 11 / 0 · 27 Jan 2021
FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations
Yichi Zhang, Junhao Pan, Xinheng Liu, Hongzheng Chen, Deming Chen, Zhiru Zhang · MQ · 109 / 96 / 0 · 22 Dec 2020
PAMS: Quantized Super-Resolution via Parameterized Max Scale
Huixia Li, Chenqian Yan, Shaohui Lin, Xiawu Zheng, Yuchao Li, Baochang Zhang, Fan Yang, Rongrong Ji · MQ · 76 / 86 / 0 · 09 Nov 2020
Joint Pruning & Quantization for Extremely Sparse Neural Networks
Po-Hsiang Yu, Sih-Sian Wu, Jan P. Klopp, Liang-Gee Chen, Shao-Yi Chien · MQ · 79 / 16 / 0 · 05 Oct 2020
Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks
Yoonho Boo, Sungho Shin, Jungwook Choi, Wonyong Sung · MQ · 83 / 30 / 0 · 30 Sep 2020
Fast Implementation of 4-bit Convolutional Neural Networks for Mobile Devices
A. Trusov, E. Limonova, Dmitry Slugin, D. Nikolaev, V. Arlazarov · MQ · 86 / 17 / 0 · 14 Sep 2020
FATNN: Fast and Accurate Ternary Neural Networks
Peng Chen, Bohan Zhuang, Chunhua Shen · MQ · 52 / 15 / 0 · 12 Aug 2020
PROFIT: A Novel Training Method for sub-4-bit MobileNet Models
Eunhyeok Park, S. Yoo · MQ · 64 / 85 / 0 · 11 Aug 2020
AQD: Towards Accurate Fully-Quantized Object Detection
Peng Chen, Jing Liu, Bohan Zhuang, Mingkui Tan, Chunhua Shen · MQ · 95 / 9 / 0 · 14 Jul 2020
Temporal Self-Ensembling Teacher for Semi-Supervised Object Detection
Cong Chen, Shouyang Dong, Ye Tian, K. Cao, Li Liu, Yuanhao Guo · 67 / 28 / 0 · 13 Jul 2020
Distillation Guided Residual Learning for Binary Convolutional Neural Networks
Jianming Ye, Shiliang Zhang, Jingdong Wang · MQ · 124 / 19 / 0 · 10 Jul 2020
EasyQuant: Post-training Quantization via Scale Optimization
Di Wu, Qingming Tang, Yongle Zhao, Ming Zhang, Ying Fu, Debing Zhang · MQ · 84 / 78 / 0 · 30 Jun 2020
Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors
C. Coelho, Aki Kuusela, Shane Li, Zhuang Hao, T. Aarrestad, Vladimir Loncar, J. Ngadiuba, M. Pierini, Adrian Alan Pol, S. Summers · MQ · 106 / 179 / 0 · 15 Jun 2020
Role-Wise Data Augmentation for Knowledge Distillation
Jie Fu, Xue Geng, Zhijian Duan, Bohan Zhuang, Xingdi Yuan, Adam Trischler, Jie Lin, C. Pal, Hao Dong · 72 / 15 / 0 · 19 Apr 2020
Rethinking Differentiable Search for Mixed-Precision Neural Networks
Zhaowei Cai, Nuno Vasconcelos · MQ · 49 / 126 / 0 · 13 Apr 2020
From Quantized DNNs to Quantizable DNNs
Kunyuan Du, Ya Zhang, Haibing Guan · MQ · 58 / 3 / 0 · 11 Apr 2020
A Learning Framework for n-bit Quantized Neural Networks toward FPGAs
Jun Chen, Lu Liu, Yong Liu, Xianfang Zeng · MQ · 93 / 29 / 0 · 06 Apr 2020