NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps

5 June 2017
arXiv:1706.01406
Alessandro Aimar
Hesham Mostafa
Enrico Calabrese
A. Rios-Navarro
Ricardo Tapiador-Morales
Iulia-Alexandra Lungu
Moritz B. Milde
Federico Corradi
A. Linares-Barranco
Shih-Chii Liu
T. Delbruck

Papers citing "NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps"

36 / 36 papers shown
Towards Mobile Sensing with Event Cameras on High-agility Resource-constrained Devices: A Survey
Haoyang Wang
Ruishan Guo
Pengtao Ma
Ciyu Ruan
Xinyu Luo
Wenhua Ding
Tianyang Zhong
Jingao Xu
Yunhao Liu
Xinlei Chen
52
0
0
29 Mar 2025
Co-designing a Sub-millisecond Latency Event-based Eye Tracking System with Submanifold Sparse CNN
Baoheng Zhang
Yizhao Gao
Jingyuan Li
Hayden Kwok-Hay So
40
3
0
22 Apr 2024
Tensor Slicing and Optimization for Multicore NPUs
R. Sousa
M. Pereira
Yongin Kwon
Taeho Kim
Namsoon Jung
Chang Soo Kim
Michael Frank
Guido Araujo
27
5
0
06 Apr 2023
Cross-Layer Design for AI Acceleration with Non-Coherent Optical Computing
Febin P. Sunny
Mahdi Nikdast
S. Pasricha
32
5
0
22 Mar 2023
Algorithm and Hardware Co-Design of Energy-Efficient LSTM Networks for Video Recognition with Hierarchical Tucker Tensor Decomposition
Yu Gong
Miao Yin
Lingyi Huang
Chunhua Deng
Yang Sui
Bo Yuan
24
6
0
05 Dec 2022
Improved Projection Learning for Lower Dimensional Feature Maps
Ilan Price
Jared Tanner
24
3
0
27 Oct 2022
SBPF: Sensitiveness Based Pruning Framework For Convolutional Neural Network On Image Classification
Yihe Lu
Maoguo Gong
Wei Zhao
Kaiyuan Feng
Hao Li
VLM
29
0
0
09 Aug 2022
An Ultra-low Power TinyML System for Real-time Visual Processing at Edge
Kunran Xu
Huawei Zhang
Yishi Li
Yuhao Zhang
Rui Lai
Yi Liu
27
22
0
11 Jul 2022
Efficient Adaptive Federated Optimization of Federated Learning for IoT
Zunming Chen
Hongyan Cui
Ensen Wu
Yu Xi
27
0
0
23 Jun 2022
Multiply-and-Fire (MNF): An Event-driven Sparse Neural Network Accelerator
Miao Yu
Tingting Xiang
Venkata Pavan Kumar Miriyala
Trevor E. Carlson
20
1
0
20 Apr 2022
AEGNN: Asynchronous Event-based Graph Neural Networks
S. Schaefer
Daniel Gehrig
Davide Scaramuzza
GNN
26
142
0
31 Mar 2022
Exploiting Spatial Sparsity for Event Cameras with Visual Transformers
Zuowen Wang
Yuhuang Hu
Shih-Chii Liu
ViT
36
33
0
10 Feb 2022
EcoFlow: Efficient Convolutional Dataflows for Low-Power Neural Network Accelerators
Lois Orosa
Skanda Koppula
Yaman Umuroglu
Konstantinos Kanellopoulos
Juan Gómez Luna
Michaela Blott
K. Vissers
O. Mutlu
46
4
0
04 Feb 2022
Two Sparsities Are Better Than One: Unlocking the Performance Benefits of Sparse-Sparse Networks
Kevin Lee Hunter
Lawrence Spracklen
Subutai Ahmad
23
20
0
27 Dec 2021
Synapse Compression for Event-Based Convolutional-Neural-Network Accelerators
Lennart Bamberg
Arash Pourtaherian
Luc Waeijen
A. Chahar
Orlando Moreira
12
4
0
13 Dec 2021
Spartus: A 9.4 TOp/s FPGA-based LSTM Accelerator Exploiting Spatio-Temporal Sparsity
Chang Gao
T. Delbruck
Shih-Chii Liu
21
44
0
04 Aug 2021
ROBIN: A Robust Optical Binary Neural Network Accelerator
Febin P. Sunny
Asif Mirza
Mahdi Nikdast
S. Pasricha
MQ
33
35
0
12 Jul 2021
Dual-side Sparse Tensor Core
Yang-Feng Wang
Chen Zhang
Zhiqiang Xie
Cong Guo
Yunxin Liu
Jingwen Leng
25
75
0
20 May 2021
Hardware and Software Optimizations for Accelerating Deep Neural Networks: Survey of Current Trends, Challenges, and the Road Ahead
Maurizio Capra
Beatrice Bussolino
Alberto Marchisio
Guido Masera
Maurizio Martina
Muhammad Shafique
BDL
59
140
0
21 Dec 2020
Always-On 674uW @ 4GOP/s Error Resilient Binary Neural Networks with Aggressive SRAM Voltage Scaling on a 22nm IoT End-Node
Alfio Di Mauro
Francesco Conti
Pasquale Davide Schiavone
D. Rossi
Luca Benini
21
9
0
17 Jul 2020
Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights
Shail Dave
Riyadh Baghdadi
Tony Nowatzki
Sasikanth Avancha
Aviral Shrivastava
Baoxin Li
64
82
0
02 Jul 2020
Fully Embedding Fast Convolutional Networks on Pixel Processor Arrays
Laurie Bose
Jianing Chen
S. Carey
Piotr Dudek
W. Mayol-Cuevas
22
37
0
27 Apr 2020
Computation on Sparse Neural Networks: an Inspiration for Future Hardware
Fei Sun
Minghai Qin
Tianyun Zhang
Liu Liu
Yen-kuang Chen
Yuan Xie
37
7
0
24 Apr 2020
Data-Driven Neuromorphic DRAM-based CNN and RNN Accelerators
T. Delbruck
Shih-Chii Liu
6
4
0
29 Mar 2020
Memory Organization for Energy-Efficient Learning and Inference in Digital Neuromorphic Accelerators
Clemens J. S. Schaefer
Patrick Faley
Emre Neftci
S. Joshi
20
2
0
05 Mar 2020
Dynamic Vision Sensor integration on FPGA-based CNN accelerators for high-speed visual classification
A. Linares-Barranco
A. Rios-Navarro
Ricardo Tapiador-Morales
T. Delbruck
27
19
0
17 May 2019
NeuPart: Using Analytical Models to Drive Energy-Efficient Partitioning of CNN Computations on Cloud-Connected Mobile Clients
Susmita Dey Manasi
F. S. Snigdha
S. Sapatnekar
34
16
0
09 May 2019
The importance of space and time in neuromorphic cognitive agents
Giacomo Indiveri
Yulia Sandamirskaya
AI4CE
30
49
0
26 Feb 2019
Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
Hesham Mostafa
Xin Wang
37
307
0
15 Feb 2019
CBinfer: Exploiting Frame-to-Frame Locality for Faster Convolutional Network Inference on Video Streams
Lukas Cavigelli
Luca Benini
27
26
0
15 Aug 2018
XNOR Neural Engine: a Hardware Accelerator IP for 21.6 fJ/op Binary Neural Network Inference
Francesco Conti
Pasquale Davide Schiavone
Luca Benini
32
108
0
09 Jul 2018
Hyperdrive: A Multi-Chip Systolically Scalable Binary-Weight CNN Inference Engine
Renzo Andri
Lukas Cavigelli
D. Rossi
Luca Benini
MQ
24
19
0
05 Mar 2018
ADaPTION: Toolbox and Benchmark for Training Convolutional Neural Networks with Reduced Numerical Precision Weights and Activation
Moritz B. Milde
Daniel Neil
Alessandro Aimar
T. Delbruck
Giacomo Indiveri
MQ
31
9
0
13 Nov 2017
CBinfer: Change-Based Inference for Convolutional Neural Networks on Video Data
Lukas Cavigelli
Philippe Degen
Luca Benini
BDL
25
51
0
14 Apr 2017
Delta Networks for Optimized Recurrent Network Computation
Daniel Neil
Junhaeng Lee
T. Delbruck
Shih-Chii Liu
33
66
0
16 Dec 2016
ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation
Adam Paszke
Abhishek Chaurasia
Sangpil Kim
Eugenio Culurciello
SSeg
235
2,059
0
07 Jun 2016