ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Compiling Neural Networks for a Computational Memory Accelerator (arXiv:2003.04293)

5 March 2020
K. Kourtis, M. Dazzi, Nikolas Ioannou, Tobias Grosser, Abu Sebastian, E. Eleftheriou

Papers citing "Compiling Neural Networks for a Computational Memory Accelerator" (11 of 11 papers shown)

  1. "5 Parallel Prism: A topology for pipelined implementations of convolutional neural networks using computational memory" by M. Dazzi, Abu Sebastian, P. Francese, Thomas Parnell, Luca Benini, E. Eleftheriou (08 Jun 2019)
  2. "Stripe: Tensor Compilation via the Nested Polyhedral Model" by Tim Zerrell, J. Bruestle (14 Mar 2019)
  3. "RxNN: A Framework for Evaluating Deep Neural Networks on Resistive Crossbars" by Shubham Jain, Abhronil Sengupta, Kaushik Roy, A. Raghunathan (31 Aug 2018)
  4. "Quantizing deep convolutional networks for efficient inference: A whitepaper" by Raghuraman Krishnamoorthi (21 Jun 2018)
  5. "Tiramisu: A Polyhedral Compiler for Expressing Fast and Portable Code" by Riyadh Baghdadi, Jessica Ray, Malek Ben Romdhane, Emanuele Del Sozzo, Abdurrahman Akkas, Yunming Zhang, Patricia Suriana, Shoaib Kamil, Saman P. Amarasinghe (27 Apr 2018)
  6. "Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions" by Nicolas Vasilache, O. Zinenko, Theodoros Theodoridis, Priya Goyal, Zach DeVito, William S. Moses, Sven Verdoolaege, Andrew Adams, Albert Cohen (13 Feb 2018)
  7. "TVM: An Automated End-to-End Optimizing Compiler for Deep Learning" by Tianqi Chen, T. Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Q. Yan, ..., Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, Arvind Krishnamurthy (12 Feb 2018)
  8. "Bridging the Gap Between Neural Networks and Neuromorphic Hardware with A Neural Network Compiler" by Yu Ji, Youhui Zhang, Wenguang Chen, Yuan Xie (15 Nov 2017)
  9. "SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks" by A. Parashar, Minsoo Rhu, Anurag Mukkara, A. Puglielli, Rangharajan Venkatesan, Brucek Khailany, J. Emer, S. Keckler, W. Dally (23 May 2017)
  10. "In-Datacenter Performance Analysis of a Tensor Processing Unit" by N. Jouppi, C. Young, Nishant Patil, David Patterson, Gaurav Agrawal, ..., Vijay Vasudevan, Richard Walter, Walter Wang, Eric Wilcox, Doe Hyun Yoon (16 Apr 2017)
  11. "Efficient Processing of Deep Neural Networks: A Tutorial and Survey" by Vivienne Sze, Yu-hsin Chen, Tien-Ju Yang, J. Emer (27 Mar 2017)