Harnessing Manycore Processors with Distributed Memory for Accelerated Training of Sparse and Recurrent Models

7 November 2023 (arXiv:2311.04386)
Jan Finkbeiner, Thomas Gmeinder, M. Pupilli, A. Titterton, Emre Neftci

Papers citing "Harnessing Manycore Processors with Distributed Memory for Accelerated Training of Sparse and Recurrent Models"

3 / 3 papers shown

1. A Truly Sparse and General Implementation of Gradient-Based Synaptic Plasticity
   Jamie Lohoff, Anil Kaya, Florian Assmuth, Emre Neftci (20 Jan 2025)

2. Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
   Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste (31 Jan 2021)

3. Improving neural networks by preventing co-adaptation of feature detectors
   Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov (03 Jul 2012)