DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients

20 June 2016
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
MQ

Papers citing "DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients"

50 / 444 papers shown
Learnable Mixed-precision and Dimension Reduction Co-design for Low-storage Activation
Yu-Shan Tai, Cheng-Yang Chang, Chieh-Fang Teng, An-Yeu Wu
16 Jul 2022
Lipschitz Continuity Retained Binary Neural Network
Yuzhang Shang, Dan Xu, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan
13 Jul 2022
I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference
Zhikai Li, Qingyi Gu
MQ
04 Jul 2022
QReg: On Regularization Effects of Quantization
Mohammadhossein Askarihemmat, Reyhane Askari Hemmat, Alexander Hoffman, Ivan Lazarevich, Ehsan Saboori, Olivier Mastropietro, Sudhakar Sah, Yvon Savaria, J. David
MQ
24 Jun 2022
Fast Lossless Neural Compression with Integer-Only Discrete Flows
Siyu Wang, Jianfei Chen, Chongxuan Li, Jun Zhu, Bo Zhang
MQ
17 Jun 2022
Optimal Clipping and Magnitude-aware Differentiation for Improved Quantization-aware Training
Charbel Sakr, Steve Dai, Rangharajan Venkatesan, B. Zimmer, W. Dally, Brucek Khailany
MQ
13 Jun 2022
SDQ: Stochastic Differentiable Quantization with Mixed Precision
Xijie Huang, Zhiqiang Shen, Shichao Li, Zechun Liu, Xianghong Hu, Jeffry Wicaksana, Eric P. Xing, Kwang-Ting Cheng
MQ
09 Jun 2022
8-bit Numerical Formats for Deep Neural Networks
Badreddine Noune, Philip Jones, Daniel Justus, Dominic Masters, Carlo Luschi
MQ
06 Jun 2022
GAAF: Searching Activation Functions for Binary Neural Networks through Genetic Algorithm
Yanfei Li, Tong Geng, S. Stein, Ang Li, Hui-Ling Yu
MQ
05 Jun 2022
DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks
Y. Fu, Haichuan Yang, Jiayi Yuan, Meng Li, Cheng Wan, Raghuraman Krishnamoorthi, Vikas Chandra, Yingyan Lin
02 Jun 2022
HyBNN and FedHyBNN: (Federated) Hybrid Binary Neural Networks
Kinshuk Dua
FedML, MQ
19 May 2022
ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks
Haoran You, Baopu Li, Huihong Shi, Y. Fu, Yingyan Lin
17 May 2022
Binarizing by Classification: Is soft function really necessary?
Yefei He, Luoming Zhang, Weijia Wu, Hong Zhou
MQ
16 May 2022
RAPQ: Rescuing Accuracy for Power-of-Two Low-bit Post-training Quantization
Hongyi Yao, Pu Li, Jian Cao, Xiangcheng Liu, Chenying Xie, Bin Wang
MQ
26 Apr 2022
Compact Model Training by Low-Rank Projection with Energy Transfer
K. Guo, Zhenquan Lin, Xiaofen Xing, Fang Liu, Xiangmin Xu
12 Apr 2022
E^2TAD: An Energy-Efficient Tracking-based Action Detector
Xin Hu, Zhenyu Wu, Haoyuan Miao, Siqi Fan, Taiyu Long, ..., Pengcheng Pi, Yi Wu, Zhou Ren, Zhangyang Wang, G. Hua
09 Apr 2022
Bimodal Distributed Binarized Neural Networks
T. Rozen, Moshe Kimhi, Brian Chmiel, A. Mendelson, Chaim Baskin
MQ
05 Apr 2022
FxP-QNet: A Post-Training Quantizer for the Design of Mixed Low-Precision DNNs with Dynamic Fixed-Point Representation
Ahmad Shawahna, S. M. Sait, A. El-Maleh, Irfan Ahmad
MQ
22 Mar 2022
Structured Pruning is All You Need for Pruning CNNs at Initialization
Yaohui Cai, Weizhe Hua, Hongzheng Chen, G. E. Suh, Christopher De Sa, Zhiru Zhang
CVBM
04 Mar 2022
Standard Deviation-Based Quantization for Deep Neural Networks
Amir Ardakani, A. Ardakani, B. Meyer, J. Clark, W. Gross
MQ
24 Feb 2022
Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization
Jaehong Yoon, Geondo Park, Wonyong Jeong, Sung Ju Hwang
FedML
23 Feb 2022
Quantune: Post-training Quantization of Convolutional Neural Networks using Extreme Gradient Boosting for Fast Deployment
Jemin Lee, Misun Yu, Yongin Kwon, Teaho Kim
MQ
10 Feb 2022
Binary Neural Networks as a general-propose compute paradigm for on-device computer vision
Guhong Nie, Lirui Xiao, Menglong Zhu, Dongliang Chu, Yue-Hong Shen, Peng Li, Kan Yang, Li Du, Bo Chen (DJI Innovations Inc)
MQ
08 Feb 2022
Post-training Quantization for Neural Networks with Provable Guarantees
Jinjie Zhang, Yixuan Zhou, Rayan Saab
MQ
26 Jan 2022
HEAM: High-Efficiency Approximate Multiplier Optimization for Deep Neural Networks
Su Zheng, Zhen Li, Yao Lu, Jingbo Gao, Jide Zhang, Lingli Wang
20 Jan 2022
Hardware-Efficient Deconvolution-Based GAN for Edge Computing
A. Alhussain, Mingjie Lin
18 Jan 2022
Sub-mW Keyword Spotting on an MCU: Analog Binary Feature Extraction and Binary Neural Networks
G. Cerutti, Lukas Cavigelli, Renzo Andri, Michele Magno, Elisabetta Farella, Luca Benini
10 Jan 2022
An Empirical Study of Adder Neural Networks for Object Detection
Xinghao Chen, Chang Xu, Minjing Dong, Chunjing Xu, Yunhe Wang
27 Dec 2021
Training Quantized Deep Neural Networks via Cooperative Coevolution
Fu Peng, Shengcai Liu, Ning Lu, Ke Tang
MQ
23 Dec 2021
Elastic-Link for Binarized Neural Network
Jie Hu, Ziheng Wu, Vince Tan, Zhilin Lu, Mengze Zeng, Enhua Wu
MQ
19 Dec 2021
N3H-Core: Neuron-designed Neural Network Accelerator via FPGA-based Heterogeneous Computing Cores
Yu Gong, Zhihang Xu, Zhezhi He, Weifeng Zhang, Xiaobing Tu, Xiaoyao Liang, Li Jiang
15 Dec 2021
Neural Network Quantization for Efficient Inference: A Survey
Olivia Weng
MQ
08 Dec 2021
PokeBNN: A Binary Pursuit of Lightweight Accuracy
Yichi Zhang, Zhiru Zhang, Lukasz Lew
MQ
30 Nov 2021
Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation
Zechun Liu, Kwang-Ting Cheng, Dong Huang, Eric P. Xing, Zhiqiang Shen
MQ
29 Nov 2021
FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
Yang Lin, Tianyu Zhang, Peiqin Sun, Zheng Li, Shuchang Zhou
ViT, MQ
27 Nov 2021
Sharpness-aware Quantization for Deep Neural Networks
Jing Liu, Jianfei Cai, Bohan Zhuang
MQ
24 Nov 2021
Toward Compact Parameter Representations for Architecture-Agnostic Neural Network Compression
Yuezhou Sun, Wenlong Zhao, Lijun Zhang, Xiao Liu, Hui Guan, Matei A. Zaharia
19 Nov 2021
Iterative Training: Finding Binary Weight Deep Neural Networks with Layer Binarization
Cheng-Chou Lan
MQ
13 Nov 2021
Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples
Kanghyun Choi, Deokki Hong, Noseong Park, Youngsok Kim, Jinho Lee
MQ
04 Nov 2021
Arch-Net: Model Distillation for Architecture Agnostic Model Deployment
Weixin Xu, Zipeng Feng, Shuangkang Fang, Song Yuan, Yi Yang, Shuchang Zhou
MQ
01 Nov 2021
ILMPQ: An Intra-Layer Multi-Precision Deep Neural Network Quantization framework for FPGA
Sung-En Chang, Yanyu Li, Mengshu Sun, Yanzhi Wang, Xue Lin
MQ
30 Oct 2021
MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning
Ji Lin, Wei-Ming Chen, Han Cai, Chuang Gan, Song Han
28 Oct 2021
Demystifying and Generalizing BinaryConnect
Abhishek Sharma, Yaoliang Yu, Eyyub Sari, Mahdi Zolnouri, V. Nia
MQ
25 Oct 2021
A Layer-wise Adversarial-aware Quantization Optimization for Improving Robustness
Chang Song, Riya Ranjan, H. Li
MQ
23 Oct 2021
Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks
Yikai Wang, Yi Yang, Gang Hua, Anbang Yao
MQ
18 Oct 2021
BNAS v2: Learning Architectures for Binary Networks with Empirical Improvements
Dahyun Kim, Kunal Pratap Singh, Jonghyun Choi
MQ
16 Oct 2021
Towards Mixed-Precision Quantization of Neural Networks via Constrained Optimization
Weihan Chen, Peisong Wang, Jian Cheng
MQ
13 Oct 2021
Dynamic Binary Neural Network by learning channel-wise thresholds
Jiehua Zhang, Z. Su, Yang Feng, Xin Lu, M. Pietikäinen, Li Liu
MQ
08 Oct 2021
Understanding and Overcoming the Challenges of Efficient Transformer Quantization
Yelysei Bondarenko, Markus Nagel, Tijmen Blankevoort
MQ
27 Sep 2021
Deep Structured Instance Graph for Distilling Object Detectors
Yixin Chen, Pengguang Chen, Shu Liu, Liwei Wang, Jiaya Jia
ObjD, ISeg
27 Sep 2021