ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Scaling Distributed Training with Adaptive Summation
arXiv:2006.02924

4 June 2020
Saeed Maleki
Madan Musuvathi
Todd Mytkowicz
Olli Saarikivi
Tianju Xu
Vadim Eksarevskiy
Jaliya Ekanayake
Emad Barsoum

Papers citing "Scaling Distributed Training with Adaptive Summation"

2 / 2 papers shown
Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression
Jaeyong Song
Jinkyu Yim
Jaewon Jung
Hongsun Jang
H. Kim
Youngsok Kim
Jinho Lee
24 Jan 2023
Distributed Training of Embeddings using Graph Analytics
G. Gill
Roshan Dathathri
Saeed Maleki
Madan Musuvathi
Todd Mytkowicz
Olli Saarikivi
08 Sep 2019