Scaling Deep Learning on GPU and Knights Landing clusters

Yang You, A. Buluç, J. Demmel · 9 August 2017 · [GNN]

Papers citing "Scaling Deep Learning on GPU and Knights Landing clusters"

11 citing papers:

1. Towards Efficient Communications in Federated Learning: A Contemporary Survey
   Zihao Zhao, Yuzhu Mao, Yang Liu, Linqi Song, Ouyang Ye, Xinlei Chen, Wenbo Ding · 02 Aug 2022 · [FedML]

2. Reducing Data Motion to Accelerate the Training of Deep Neural Networks
   Sicong Zhuang, Cristiano Malossi, Marc Casas · 05 Apr 2020

3. Communication Contention Aware Scheduling of Multiple Deep Learning Training Jobs
   Qiang-qiang Wang, S. Shi, Canhui Wang, Xiaowen Chu · 24 Feb 2020

4. Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach
   Pengchao Han, Shiqiang Wang, K. Leung · 14 Jan 2020 · [FedML]

5. A Survey on Distributed Machine Learning
   Joost Verbraeken, Matthijs Wolting, Jonathan Katzy, Jeroen Kloppenburg, Tim Verbelen, Jan S. Rellermeyer · 20 Dec 2019 · [OOD]

6. MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning
   S. Shi, Xiaowen Chu, Bo Li · 18 Dec 2019 · [FedML]

7. Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
   S. Shi, Zhenheng Tang, Qiang-qiang Wang, Kaiyong Zhao, Xiaowen Chu · 20 Nov 2019

8. AI Enabling Technologies: A Survey
   V. Gadepally, Justin A. Goodwin, J. Kepner, Albert Reuther, Hayley Reynolds, S. Samsi, Jonathan Su, David Martinez · 08 May 2019

9. GeneSys: Enabling Continuous Learning through Neural Network Evolution in Hardware
   A. Samajdar, Parth Mannan, K. Garg, T. Krishna · 03 Aug 2018

10. Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
    Tal Ben-Nun, Torsten Hoefler · 26 Feb 2018 · [GNN]

11. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · 15 Sep 2016 · [ODL]