ArXiv: 2402.07529
Accelerating Distributed Deep Learning using Lossless Homomorphic Compression
12 February 2024
Haoyu Li, Yuchen Xu, Jiayi Chen, Rohit Dwivedula, Wenfei Wu, Keqiang He, Aditya Akella, Daehyeok Kim
Communities: FedML, AI4CE
Papers citing "Accelerating Distributed Deep Learning using Lossless Homomorphic Compression" (2 of 2 papers shown)

TAGC: Optimizing Gradient Communication in Distributed Transformer Training
Igor Polyakov, Alexey Dukhanov, Egor Spirin
08 Apr 2025

Parameter Hub: a Rack-Scale Parameter Server for Distributed Deep Neural Network Training
Liang Luo, Jacob Nelson, Luis Ceze, Amar Phanishayee, Arvind Krishnamurthy
21 May 2018