ResearchTrend.AI

FINN: A Framework for Fast, Scalable Binarized Neural Network Inference

1 December 2016
Yaman Umuroglu, Nicholas J. Fraser, Giulio Gambardella, Michaela Blott, Philip H. W. Leong, Magnus Jahre, K. Vissers
Tags: MQ

Papers citing "FINN: A Framework for Fast, Scalable Binarized Neural Network Inference"

22 of 222 citing papers shown
XNORBIN: A 95 TOp/s/W Hardware Accelerator for Binary Convolutional Neural Networks
A. Bahou, G. Karunaratne, Renzo Andri, Lukas Cavigelli, Luca Benini
05 Mar 2018 · Tags: MQ
Towards Ultra-High Performance and Energy Efficiency of Deep Learning Systems: An Algorithm-Hardware Co-Optimization Framework
Yanzhi Wang, Caiwen Ding, Zhe Li, Geng Yuan, Siyu Liao, ..., Bo Yuan, Xuehai Qian, Jian Tang, Qinru Qiu, X. Lin
18 Feb 2018
TVM: An Automated End-to-End Optimizing Compiler for Deep Learning
Tianqi Chen, T. Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Q. Yan, ..., Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, Arvind Krishnamurthy
12 Feb 2018
Recent Advances in Efficient Computation of Deep Convolutional Neural Networks
Jian Cheng, Peisong Wang, Gang Li, Qinghao Hu, Hanqing Lu
03 Feb 2018
Automated flow for compressing convolution neural networks for efficient edge-computation with FPGA
F. Shafiq, Takato Yamada, Antonio T. Vilchez, Sakyasingha Dasgupta
18 Dec 2017 · Tags: MQ
Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks
Hardik Sharma, Jongse Park, Naveen Suda, Liangzhen Lai, Benson Chau, Joo-Young Kim, Vikas Chandra, H. Esmaeilzadeh
05 Dec 2017 · Tags: MQ
NEURAghe: Exploiting CPU-FPGA Synergies for Efficient and Flexible CNN Inference Acceleration on Zynq SoCs
Paolo Meloni, Alessandro Capotondi, Gianfranco Deriu, Michele Brian, Francesco Conti, D. Rossi, L. Raffo, Luca Benini
04 Dec 2017
Design Automation for Binarized Neural Networks: A Quantum Leap Opportunity?
Manuele Rusci, Lukas Cavigelli, Luca Benini
21 Nov 2017 · Tags: MQ
Tactics to Directly Map CNN graphs on Embedded FPGAs
K. Abdelouahab, Maxime Pelcat, Jocelyn Sérot, C. Bourrasset, F. Berry
20 Nov 2017
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Asit K. Mishra, Debbie Marr
15 Nov 2017 · Tags: FedML
ReBNet: Residual Binarized Neural Network
M. Ghasemzadeh, Mohammad Samragh, F. Koushanfar
03 Nov 2017 · Tags: MQ
The implementation of a Deep Recurrent Neural Network Language Model on a Xilinx FPGA
Yufeng Hao, S. Quigley
26 Oct 2017
Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks
Julian Faraone, Nicholas J. Fraser, Giulio Gambardella, Michaela Blott, Philip H. W. Leong
19 Sep 2017 · Tags: MQ, UQCV
Streamlined Deployment for Quantized Neural Networks
Yaman Umuroglu, Magnus Jahre
12 Sep 2017 · Tags: MQ
WRPN: Wide Reduced-Precision Networks
Asit K. Mishra, Eriko Nurvitadhi, Jeffrey J. Cook, Debbie Marr
04 Sep 2017 · Tags: MQ
CirCNN: Accelerating and Compressing Deep Neural Networks Using Block-Circulant Weight Matrices
Caiwen Ding, Siyu Liao, Yanzhi Wang, Zhe Li, Ning Liu, ..., Yipeng Zhang, Jian Tang, Qinru Qiu, X. Lin, Bo Yuan
29 Aug 2017 · Tags: GNN
Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform
Chaim Baskin, Natan Liss, Evgenii Zheltonozhskii, A. Bronstein, A. Mendelson
31 Jul 2017 · Tags: GNN, MQ
Ternary Residual Networks
Abhisek Kundu, K. Banerjee, Naveen Mellempudi, Dheevatsa Mudigere, Dipankar Das, Bharat Kaul, Pradeep Dubey
15 Jul 2017
Ternary Neural Networks with Fine-Grained Quantization
Naveen Mellempudi, Abhisek Kundu, Dheevatsa Mudigere, Dipankar Das, Bharat Kaul, Pradeep Dubey
02 May 2017 · Tags: MQ
Deep Reservoir Computing Using Cellular Automata
Stefano Nichele, Andreas Molund
08 Mar 2017
Mixed Low-precision Deep Learning Inference using Dynamic Fixed Point
Naveen Mellempudi, Abhisek Kundu, Dipankar Das, Dheevatsa Mudigere, Bharat Kaul
31 Jan 2017 · Tags: MQ
Scaling Binarized Neural Networks on Reconfigurable Logic
Nicholas J. Fraser, Yaman Umuroglu, Giulio Gambardella, Michaela Blott, Philip H. W. Leong, Magnus Jahre, K. Vissers
12 Jan 2017 · Tags: MQ