ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv: 1803.05849
XNORBIN: A 95 TOp/s/W Hardware Accelerator for Binary Convolutional Neural Networks

5 March 2018
A. Bahou
G. Karunaratne
Renzo Andri
Lukas Cavigelli
Luca Benini

Papers citing "XNORBIN: A 95 TOp/s/W Hardware Accelerator for Binary Convolutional Neural Networks"

8 / 8 papers shown
COBRA: Algorithm-Architecture Co-optimized Binary Transformer Accelerator for Edge Inference
Ye Qiao
Zhiheng Cheng
Yian Wang
Yifan Zhang
Yunzhe Deng
Sitao Huang
22 Apr 2025
A Configurable BNN ASIC using a Network of Programmable Threshold Logic Standard Cells
Ankit Wagle
S. Khatri
S. Vrudhula
04 Apr 2021
Hardware and Software Optimizations for Accelerating Deep Neural Networks: Survey of Current Trends, Challenges, and the Road Ahead
Maurizio Capra
Beatrice Bussolino
Alberto Marchisio
Guido Masera
Maurizio Martina
Muhammad Shafique
21 Dec 2020
Always-On 674uW @ 4GOP/s Error Resilient Binary Neural Networks with Aggressive SRAM Voltage Scaling on a 22nm IoT End-Node
Alfio Di Mauro
Francesco Conti
Pasquale Davide Schiavone
D. Rossi
Luca Benini
17 Jul 2020
Towards Fast and Energy-Efficient Binarized Neural Network Inference on FPGA
Cheng Fu
Shilin Zhu
Hao Su
Ching-En Lee
Jishen Zhao
04 Oct 2018
XNOR Neural Engine: a Hardware Accelerator IP for 21.6 fJ/op Binary Neural Network Inference
Francesco Conti
Pasquale Davide Schiavone
Luca Benini
09 Jul 2018
Hyperdrive: A Multi-Chip Systolically Scalable Binary-Weight CNN Inference Engine
Renzo Andri
Lukas Cavigelli
D. Rossi
Luca Benini
05 Mar 2018
PBGen: Partial Binarization of Deconvolution-Based Generators for Edge Intelligence
Jinglan Liu
Jiaxin Zhang
Yukun Ding
Xiaowei Xu
Meng Jiang
Yiyu Shi
26 Feb 2018