Accelerating Distributed Deep Learning using Lossless Homomorphic Compression

12 February 2024
Haoyu Li, Yuchen Xu, Jiayi Chen, Rohit Dwivedula, Wenfei Wu, Keqiang He, Aditya Akella, Daehyeok Kim
FedML, AI4CE

Papers citing "Accelerating Distributed Deep Learning using Lossless Homomorphic Compression"

2 / 2 papers shown
TAGC: Optimizing Gradient Communication in Distributed Transformer Training
Igor Polyakov, Alexey Dukhanov, Egor Spirin
08 Apr 2025

Parameter Hub: a Rack-Scale Parameter Server for Distributed Deep Neural Network Training
Liang Luo, Jacob Nelson, Luis Ceze, Amar Phanishayee, Arvind Krishnamurthy
21 May 2018