Harnessing Manycore Processors with Distributed Memory for Accelerated Training of Sparse and Recurrent Models
Jan Finkbeiner, Thomas Gmeinder, M. Pupilli, A. Titterton, Emre Neftci
arXiv:2311.04386, 7 November 2023
Papers citing "Harnessing Manycore Processors with Distributed Memory for Accelerated Training of Sparse and Recurrent Models" (3 / 3 papers shown)
A Truly Sparse and General Implementation of Gradient-Based Synaptic Plasticity
Jamie Lohoff, Anil Kaya, Florian Assmuth, Emre Neftci
20 Jan 2025

Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
31 Jan 2021

Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
03 Jul 2012