Speeding up Convolutional Neural Networks By Exploiting the Sparsity of Rectifier Units (arXiv:1704.07724)
25 April 2017
S. Shi, Xiaowen Chu
Papers citing "Speeding up Convolutional Neural Networks By Exploiting the Sparsity of Rectifier Units" (9 of 9 papers shown)
Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models. Muyang Li, Ji Lin, Chenlin Meng, Stefano Ermon, Song Han, Jun-Yan Zhu. 03 Nov 2022. (DiffM)
Dual-side Sparse Tensor Core. Yang-Feng Wang, Chen Zhang, Zhiqiang Xie, Cong Guo, Yunxin Liu, Jingwen Leng. 20 May 2021.
Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights. Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li. 02 Jul 2020.
Efficient Sparse-Dense Matrix-Matrix Multiplication on GPUs Using the Customized Sparse Storage Format. S. Shi, Qiang-qiang Wang, Xiaowen Chu. 29 May 2020.
Sparse Deep Neural Network Graph Challenge. J. Kepner, Simon Alford, V. Gadepally, Michael Jones, Lauren Milechin, Ryan A. Robinett, S. Samsi. 02 Sep 2019. (GNN)
Sparse Deep Neural Network Exact Solutions. J. Kepner, V. Gadepally, Hayden Jananthan, Lauren Milechin, S. Samsi. 06 Jul 2018.
Recurrent Residual Module for Fast Inference in Videos. Bowen Pan, Wuwei Lin, Xiaolin Fang, Chaoqin Huang, Bolei Zhou, Cewu Lu. 27 Feb 2018. (ObjD)
Enabling Massive Deep Neural Networks with the GraphBLAS. J. Kepner, Manoj Kumar, José Moreira, P. Pattnaik, M. Serrano, H. Tufo. 09 Aug 2017. (GNN)
Bayesian Compression for Deep Learning. Christos Louizos, Karen Ullrich, Max Welling. 24 May 2017. (UQCV, BDL)