TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training and Inference
arXiv:2009.00748, 1 September 2020
Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Omar Mohamed Awad, Gennady Pekhimenko, Jorge Albericio, Andreas Moshovos
Papers citing "TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training and Inference" (7 papers)
DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning
Mohammad Hossein Samavatian, Saikat Majumdar, Kristin Barber, R. Teodorescu
31 Jul 2022
Energy awareness in low precision neural networks
Nurit Spingarn-Eliezer, Ron Banner, Elad Hoffer, Hilla Ben-Yaacov, T. Michaeli
06 Feb 2022
Accelerating DNN Training with Structured Data Gradient Pruning
Bradley McDanel, Helia Dinh, J. Magallanes
01 Feb 2022
BitTrain: Sparse Bitmap Compression for Memory-Efficient Training on the Edge
Abdelrahman I. Hosny, Marina Neseem, Sherief Reda
29 Oct 2021
S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration
Zhi-Gang Liu, P. Whatmough, Yuhao Zhu, Matthew Mattina
16 Jul 2021
SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning
Hanrui Wang, Zhekai Zhang, Song Han
17 Dec 2020
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018