ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients (arXiv:1606.06160)
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
MQ · 20 June 2016
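For context on the cited method: DoReFa-Net quantizes weights, activations, and gradients to low bitwidths, backpropagating through the rounding step with a straight-through estimator. A minimal NumPy sketch of its k-bit uniform quantizer, forward pass only (function names are illustrative; the straight-through backward pass and the gradient quantizer are omitted):

```python
import numpy as np

def quantize_k(x, k):
    # DoReFa-style k-bit uniform quantizer for values in [0, 1]:
    # q = round((2^k - 1) * x) / (2^k - 1)
    n = float(2 ** k - 1)
    return np.round(x * n) / n

def quantize_weights(w, k):
    # DoReFa weight quantization: squash weights with tanh,
    # normalize into [0, 1], quantize, then map back to [-1, 1].
    t = np.tanh(w)
    x = t / (2.0 * np.max(np.abs(t))) + 0.5
    return 2.0 * quantize_k(x, k) - 1.0
```

With k = 2 this snaps values in [0, 1] onto the grid {0, 1/3, 2/3, 1}; in training, the rounding is treated as identity in the backward pass so gradients can flow.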

Papers citing "DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients"

50 / 444 papers shown
OMPQ: Orthogonal Mixed Precision Quantization
Yuexiao Ma, Taisong Jin, Xiawu Zheng, Yan Wang, Huixia Li, Yongjian Wu, Guannan Jiang, Wei Zhang, Rongrong Ji
MQ · 16 Sep 2021

Fast Federated Edge Learning with Overlapped Communication and Computation and Channel-Aware Fair Client Scheduling
M. E. Ozfatura, Junlin Zhao, Deniz Gündüz
14 Sep 2021

Complexity-aware Adaptive Training and Inference for Edge-Cloud Distributed AI Systems
Yinghan Long, I. Chakraborty, G. Srinivasan, Kaushik Roy
14 Sep 2021

Elastic Significant Bit Quantization and Acceleration for Deep Neural Networks
Cheng Gong, Ye Lu, Kunpeng Xie, Zongming Jin, Tao Li, Yanzhi Wang
MQ · 08 Sep 2021

Guarding Machine Learning Hardware Against Physical Side-Channel Attacks
Anuj Dubey, Rosario Cammarota, Vikram B. Suresh, Aydin Aysu
AAML · 01 Sep 2021

Quantized Convolutional Neural Networks Through the Lens of Partial Differential Equations
Ido Ben-Yair, Gil Ben Shalom, Moshe Eliasof, Eran Treister
MQ · 31 Aug 2021
Auto-Split: A General Framework of Collaborative Edge-Cloud AI
Amin Banitalebi-Dehkordi, Naveen Vedula, J. Pei, Fei Xia, Lanjun Wang, Yong Zhang
30 Aug 2021

An Information Theory-inspired Strategy for Automatic Network Pruning
Xiawu Zheng, Yuexiao Ma, Teng Xi, Gang Zhang, Errui Ding, Yuchao Li, Jie Chen, Yonghong Tian, Rongrong Ji
19 Aug 2021

Bias Loss for Mobile Neural Networks
L. Abrahamyan, Valentin Ziatchin, Yiming Chen, Nikos Deligiannis
23 Jul 2021

A High-Performance Adaptive Quantization Approach for Edge CNN Applications
Hsu-Hsun Chin, R. Tsay, Hsin-I Wu
MQ · 18 Jul 2021

Content-Aware Convolutional Neural Networks
Yong Guo, Yaofo Chen, Mingkui Tan, Kui Jia, Jian Chen, Jingdong Wang
30 Jun 2021

LNS-Madam: Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update
Jiawei Zhao, Steve Dai, Rangharajan Venkatesan, Brian Zimmer, Mustafa Ali, Xuan Li, Brucek Khailany, B. Dally, Anima Anandkumar
MQ · 26 Jun 2021
APNN-TC: Accelerating Arbitrary Precision Neural Networks on Ampere GPU Tensor Cores
Boyuan Feng, Yuke Wang, Tong Geng, Ang Li, Yufei Ding
MQ · 23 Jun 2021

How Do Adam and Training Strategies Help BNNs Optimization?
Zechun Liu, Zhiqiang Shen, Shichao Li, K. Helwegen, Dong Huang, Kwang-Ting Cheng
ODL, MQ · 21 Jun 2021

How Low Can We Go: Trading Memory for Error in Low-Precision Training
Chengrun Yang, Ziyang Wu, Jerry Chee, Christopher De Sa, Madeleine Udell
17 Jun 2021

A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness
James Diffenderfer, Brian Bartoldson, Shreya Chaganti, Jize Zhang, B. Kailkhura
OOD · 16 Jun 2021

ShortcutFusion: From Tensorflow to FPGA-based accelerator with reuse-aware memory allocation for shortcut data
Duy-Thanh Nguyen, Hyeonseung Je, Tuan Nghia Nguyen, Soojung Ryu, Kyujoong Lee, Hyuk-Jae Lee
15 Jun 2021

BoolNet: Minimizing The Energy Consumption of Binary Neural Networks
Nianhui Guo, Joseph Bethge, Haojin Yang, Kai Zhong, Xuefei Ning, Christoph Meinel, Yu Wang
MQ · 13 Jun 2021
Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators
Yonggan Fu, Yongan Zhang, Yang Zhang, David D. Cox, Yingyan Lin
MQ · 11 Jun 2021

Post-Training Sparsity-Aware Quantization
Gil Shomron, F. Gabbay, Samer Kurzum, U. Weiser
MQ · 23 May 2021

Extremely Lightweight Quantization Robust Real-Time Single-Image Super Resolution for Mobile Devices
Mustafa Ayazoglu
21 May 2021

BatchQuant: Quantized-for-all Architecture Search with Robust Quantizer
Haoping Bai, Mengsi Cao, Ping Huang, Jiulong Shan
MQ · 19 May 2021

In-Hindsight Quantization Range Estimation for Quantized Training
Marios Fournarakis, Markus Nagel
MQ · 10 May 2021

ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training
Jianfei Chen, Lianmin Zheng, Z. Yao, Dequan Wang, Ion Stoica, Michael W. Mahoney, Joseph E. Gonzalez
MQ · 29 Apr 2021
Quantization of Deep Neural Networks for Accurate Edge Computing
Wentao Chen, Hailong Qiu, Zhuang Jian, Chutong Zhang, Yu Hu, Qing Lu, Tianchen Wang, Yiyu Shi, Meiping Huang, Xiaowe Xu
25 Apr 2021

InstantNet: Automated Generation and Deployment of Instantaneously Switchable-Precision Networks
Yonggan Fu, Zhongzhi Yu, Yongan Zhang, Yi Ding, Chaojian Li, Yongyuan Liang, Mingchao Jiang, Zhangyang Wang, Yingyan Lin
22 Apr 2021

Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
AAML, MQ · 16 Apr 2021

"BNN - BN = ?": Training Binary Neural Networks without Batch Normalization
Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen, Zhangyang Wang
MQ · 16 Apr 2021

LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze
ViT · 02 Apr 2021
Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric Quantizer
Phuoc Pham, J. Abraham, Jaeyong Chung
MQ · 01 Apr 2021

Charged particle tracking via edge-classifying interaction networks
G. Dezoort, S. Thais, Javier Mauricio Duarte, Vesal Razavimaleki, M. Atkinson, I. Ojalvo, Mark S. Neubauer, P. Elmer
30 Mar 2021

ReCU: Reviving the Dead Weights in Binary Neural Networks
Zihan Xu, Mingbao Lin, Jianzhuang Liu, Jie Chen, Ling Shao, Yue Gao, Yonghong Tian, Rongrong Ji
MQ · 23 Mar 2021

Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network
James Diffenderfer, B. Kailkhura
MQ · 17 Mar 2021

Learned Gradient Compression for Distributed Deep Learning
L. Abrahamyan, Yiming Chen, Giannis Bekoulis, Nikos Deligiannis
16 Mar 2021

Learning Frequency Domain Approximation for Binary Neural Networks
Yixing Xu, Kai Han, Chang Xu, Yehui Tang, Chunjing Xu, Yunhe Wang
MQ · 01 Mar 2021
BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization
Huanrui Yang, Lin Duan, Yiran Chen, Hai Helen Li
MQ · 20 Feb 2021

Task-oriented Communication Design in Cyber-Physical Systems: A Survey on Theory and Applications
Arsham Mostaani, T. Vu, Shree Krishna Sharma, Van-Dinh Nguyen, Qi Liao, Symeon Chatzinotas
14 Feb 2021

Distribution Adaptive INT8 Quantization for Training CNNs
Kang Zhao, Sida Huang, Pan Pan, Yinghan Li, Yingya Zhang, Zhenyu Gu, Yinghui Xu
MQ · 09 Feb 2021

VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference
Steve Dai, Rangharajan Venkatesan, Haoxing Ren, B. Zimmer, W. Dally, Brucek Khailany
MQ · 08 Feb 2021

Enabling Binary Neural Network Training on the Edge
Erwei Wang, James J. Davis, Daniele Moro, Piotr Zielinski, Jia Jie Lim, C. Coelho, S. Chatterjee, P. Cheung, George A. Constantinides
MQ · 08 Feb 2021
Fixed-point Quantization of Convolutional Neural Networks for Quantized Inference on Embedded Platforms
Rishabh Goyal, Joaquin Vanschoren, V. V. Acht, S. Nijssen
MQ · 03 Feb 2021

Pruning and Quantization for Deep Neural Network Acceleration: A Survey
Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang
MQ · 24 Jan 2021

GhostSR: Learning Ghost Features for Efficient Image Super-Resolution
Ying Nie, Kai Han, Zhenhua Liu, Chunjing Xu, Yunhe Wang
OOD · 21 Jan 2021

Sound Event Detection with Binary Neural Networks on Tightly Power-Constrained IoT Devices
G. Cerutti, Renzo Andri, Lukas Cavigelli, Michele Magno, Elisabetta Farella, Luca Benini
MQ · 12 Jan 2021

I-BERT: Integer-only BERT Quantization
Sehoon Kim, A. Gholami, Z. Yao, Michael W. Mahoney, Kurt Keutzer
MQ · 05 Jan 2021
BinaryBERT: Pushing the Limit of BERT Quantization
Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, Irwin King
MQ · 31 Dec 2020

Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks
Ziquan Liu, Wuguannan Yao, Qiao Li, Antoni B. Chan
MQ · 26 Dec 2020

FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training
Y. Fu, Haoran You, Yang Zhao, Yue Wang, Chaojian Li, K. Gopalakrishnan, Zhangyang Wang, Yingyan Lin
MQ · 24 Dec 2020

Adaptive Precision Training for Resource Constrained Devices
Tian Huang, Yaoyu Zhang, Qiufeng Wang
23 Dec 2020

Hardware and Software Optimizations for Accelerating Deep Neural Networks: Survey of Current Trends, Challenges, and the Road Ahead
Maurizio Capra, Beatrice Bussolino, Alberto Marchisio, Guido Masera, Maurizio Martina, Mohamed Bennai
BDL · 21 Dec 2020