
arXiv: 1902.01981 (Cited By)
CodedReduce: A Fast and Robust Framework for Gradient Aggregation in Distributed Learning

6 February 2019
Authors: Amirhossein Reisizadeh, Saurav Prakash, Ramtin Pedarsani, A. Avestimehr

Papers citing "CodedReduce: A Fast and Robust Framework for Gradient Aggregation in Distributed Learning"

3 of 3 citing papers shown:
Communication-Efficient Gradient Coding for Straggler Mitigation in Distributed Learning
S. Kadhe, O. O. Koyluoglu, Kannan Ramchandran
14 May 2020
Communication-Efficient Edge AI: Algorithms and Systems
Yuanming Shi, Kai Yang, Tao Jiang, Jun Zhang, Khaled B. Letaief
Tags: GNN
22 February 2020
Optimal Distributed Online Prediction using Mini-Batches
O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
7 December 2010