ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

PULP-NN: Accelerating Quantized Neural Networks on Parallel Ultra-Low-Power RISC-V Processors
arXiv:1908.11263

29 August 2019
Angelo Garofalo
Manuele Rusci
Francesco Conti
D. Rossi
Luca Benini

Papers citing "PULP-NN: Accelerating Quantized Neural Networks on Parallel Ultra-Low-Power RISC-V Processors"

21 papers shown
Optimizing DNN Inference on Multi-Accelerator SoCs at Training-time
Matteo Risso
Luca Bompani
Daniele Jahier Pagliari
24 Feb 2025
MLPerf Power: Benchmarking the Energy Efficiency of Machine Learning Systems from Microwatts to Megawatts for Sustainable AI
Arya Tschand
Arun Tejusve Raghunath Rajan
S. Idgunji
Anirban Ghosh
J. Holleman
...
Rowan Taubitz
Sean Zhan
Scott Wasson
David Kanter
Vijay Janapa Reddi
15 Oct 2024
Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers
Manuele Rusci
Alessandro Capotondi
Luca Benini
30 May 2019
HAQ: Hardware-Aware Automated Quantization with Mixed Precision
Kuan-Chieh Wang
Zhijian Liu
Chengyue Wu
Ji Lin
Song Han
21 Nov 2018
XNOR Neural Engine: a Hardware Accelerator IP for 21.6 fJ/op Binary Neural Network Inference
Francesco Conti
Pasquale Davide Schiavone
Luca Benini
09 Jul 2018
PACT: Parameterized Clipping Activation for Quantized Neural Networks
Jungwook Choi
Zhuo Wang
Swagath Venkataramani
P. Chuang
Vijayalakshmi Srinivasan
K. Gopalakrishnan
16 May 2018
A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones
Daniele Palossi
Antonio Loquercio
Francesco Conti
Eric Flamand
Davide Scaramuzza
Luca Benini
04 May 2018
CMSIS-NN: Efficient Neural Network Kernels for Arm Cortex-M CPUs
Liangzhen Lai
Naveen Suda
Vikas Chandra
19 Jan 2018
Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
Benoit Jacob
S. Kligys
Bo Chen
Menglong Zhu
Matthew Tang
Andrew G. Howard
Hartwig Adam
Dmitry Kalenichenko
15 Dec 2017
NEURAghe: Exploiting CPU-FPGA Synergies for Efficient and Flexible CNN Inference Acceleration on Zynq SoCs
Paolo Meloni
Alessandro Capotondi
Gianfranco Deriu
Michele Brian
Francesco Conti
D. Rossi
L. Raffo
Luca Benini
04 Dec 2017
Minimum Energy Quantized Neural Networks
Bert Moons
Koen Goetschalckx
Nick Van Berckelaer
Marian Verhelst
01 Nov 2017
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard
Menglong Zhu
Bo Chen
Dmitry Kalenichenko
Weijun Wang
Tobias Weyand
M. Andreetto
Hartwig Adam
17 Apr 2017
An IoT Endpoint System-on-Chip for Secure and Energy-Efficient Near-Sensor Analytics
Francesco Conti
R. Schilling
Pasquale Davide Schiavone
A. Pullini
D. Rossi
...
Michael Gautschi
Igor Loi
Germain Haugou
Stefan Mangard
Luca Benini
18 Dec 2016
FINN: A Framework for Fast, Scalable Binarized Neural Network Inference
Yaman Umuroglu
Nicholas J. Fraser
Giulio Gambardella
Michaela Blott
Philip H. W. Leong
Magnus Jahre
K. Vissers
01 Dec 2016
Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
Itay Hubara
Matthieu Courbariaux
Daniel Soudry
Ran El-Yaniv
Yoshua Bengio
22 Sep 2016
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
Shuchang Zhou
Yuxin Wu
Zekun Ni
Xinyu Zhou
He Wen
Yuheng Zou
20 Jun 2016
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
Mohammad Rastegari
Vicente Ordonez
Joseph Redmon
Ali Farhadi
16 Mar 2016
Binarized Neural Networks
Itay Hubara
Daniel Soudry
Ran El-Yaniv
08 Feb 2016
Origami: A 803 GOp/s/W Convolutional Network Accelerator
Lukas Cavigelli
Luca Benini
14 Dec 2015
Fixed Point Quantization of Deep Convolutional Networks
D. Lin
S. Talathi
V. Annapureddy
19 Nov 2015
BinaryConnect: Training Deep Neural Networks with binary weights during propagations
Matthieu Courbariaux
Yoshua Bengio
J. David
02 Nov 2015