ResearchTrend.AI

LayerPipe: Accelerating Deep Neural Network Training by Intra-Layer and Inter-Layer Gradient Pipelining and Multiprocessor Scheduling
arXiv: 2108.06629

14 August 2021
Nanda K. Unnikrishnan, Keshab K. Parhi
AI4CE

Papers citing "LayerPipe: Accelerating Deep Neural Network Training by Intra-Layer and Inter-Layer Gradient Pipelining and Multiprocessor Scheduling"

3 / 3 papers shown
Compressing DMA Engine: Leveraging Activation Sparsity for Training Deep Neural Networks
Minsoo Rhu, Mike O'Connor, Niladrish Chatterjee, Jeff Pool, S. Keckler
03 May 2017
Efficient Processing of Deep Neural Networks: A Tutorial and Survey
Vivienne Sze, Yu-hsin Chen, Tien-Ju Yang, J. Emer
AAML, 3DV
27 Mar 2017
Decoupled Neural Interfaces using Synthetic Gradients
Max Jaderberg, Wojciech M. Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, David Silver, Koray Kavukcuoglu
18 Aug 2016