AdderNet and its Minimalist Hardware Design for Energy-Efficient Artificial Intelligence

25 January 2021
Yunhe Wang, Mingqiang Huang, Kai Han, Hanting Chen, Wei Zhang, Chunjing Xu, Dacheng Tao
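
For context on the operation this paper and several of the citing works build on: AdderNet replaces the multiply-accumulate at the core of convolution with a negative L1 distance between each input patch and each filter, so the layer needs only additions and subtractions. The sketch below is a minimal, illustrative NumPy version of that idea, not the authors' implementation; the function name, tensor shapes, and the stride-1/no-padding choice are assumptions made for the example.

import numpy as np

def adder_conv2d(x, w):
    # x: input feature map, shape (H, W, C_in)
    # w: filters, shape (K, K, C_in, C_out)
    # Returns output of shape (H-K+1, W-K+1, C_out), stride 1, no padding.
    K = w.shape[0]
    H, W, _ = x.shape
    out_h, out_w = H - K + 1, W - K + 1
    y = np.zeros((out_h, out_w, w.shape[3]), dtype=np.float64)
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i:i+K, j:j+K, :]              # (K, K, C_in)
            # Negative sum of absolute differences against every filter:
            # additions/subtractions only, no multiplications.
            y[i, j, :] = -np.abs(patch[..., None] - w).sum(axis=(0, 1, 2))
    return y

As a quick check, adder_conv2d(np.random.randn(8, 8, 3), np.random.randn(3, 3, 3, 16)) returns a (6, 6, 16) feature map; a practical version would additionally handle batching, stride, and padding.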

Papers citing "AdderNet and its Minimalist Hardware Design for Energy-Efficient Artificial Intelligence"

18 / 18 papers shown
ShiftAddNet: A Hardware-Inspired Deep Network
Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin
24 Oct 2020

Kernel Based Progressive Distillation for Adder Neural Networks
Yixing Xu, Chang Xu, Xinghao Chen, Wei Zhang, Chunjing Xu, Yunhe Wang
28 Sep 2020

AdderNet: Do We Really Need Multiplications in Deep Learning?
Hanting Chen, Yunhe Wang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu
31 Dec 2019

Constructing Energy-efficient Mixed-precision Neural Networks through Principal Component Analysis for Edge Intelligence
I. Chakraborty, Deboleena Roy, Isha Garg, Aayush Ankit, Kaushik Roy
04 Jun 2019

DeepShift: Towards Multiplication-Less Neural Networks
Mostafa Elhoushi, Zihao Chen, F. Shafiq, Ye Tian, Joey Yiwei Li
30 May 2019

MobileNetV2: Inverted Residuals and Linear Bottlenecks
Mark Sandler, Andrew G. Howard, Menglong Zhu, A. Zhmoginov, Liang-Chieh Chen
13 Jan 2018

Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
Benoit Jacob, S. Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, Dmitry Kalenichenko
15 Dec 2017

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
12 Jun 2017

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
17 Apr 2017

In-Datacenter Performance Analysis of a Tensor Processing Unit
N. Jouppi, C. Young, Nishant Patil, David Patterson, Gaurav Agrawal, ..., Vijay Vasudevan, Richard Walter, Walter Wang, Eric Wilcox, Doe Hyun Yoon
16 Apr 2017

An OpenCL(TM) Deep Learning Accelerator on Arria 10
U. Aydonat, Shane O'Connell, D. Capalija, A. Ling, Gordon R. Chiu
13 Jan 2017

Pruning Filters for Efficient ConvNets
Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf
31 Aug 2016

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi
16 Mar 2016

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
F. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, W. Dally, Kurt Keutzer
24 Feb 2016

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
10 Dec 2015

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Song Han, Huizi Mao, W. Dally
01 Oct 2015

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
11 Feb 2015

Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
04 Sep 2014