$\textbf{A}^2\textbf{CiD}^2$: Accelerating Asynchronous Communication in Decentralized Deep Learning

14 June 2023
Adel Nabli
Eugene Belilovsky
Edouard Oyallon
arXiv: 2306.08289 (abs / PDF / HTML)
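The paper's setting is peer-to-peer training in which workers interleave local gradient steps with asynchronous pairwise averaging of parameters, rather than synchronizing through a global all-reduce. For reference only, below is a minimal NumPy sketch of that baseline communication pattern: plain asynchronous gossip averaging on a toy quadratic problem. The worker count, random pairing rule, and per-worker losses are illustrative assumptions, and this is not the paper's $\textbf{A}^2\textbf{CiD}^2$ algorithm itself, which accelerates this kind of mixing with an additional momentum-like term.

```python
import numpy as np

rng = np.random.default_rng(0)

n_workers, dim, steps = 8, 4, 2000
lr = 0.05

# Each worker holds its own parameter vector and a private quadratic target;
# the global optimum of the average loss is the mean of the targets.
targets = rng.normal(size=(n_workers, dim))
params = rng.normal(size=(n_workers, dim))

for t in range(steps):
    # Local SGD step on one worker at a time (workers are not synchronized).
    i = rng.integers(n_workers)
    params[i] -= lr * (params[i] - targets[i])  # gradient of 0.5 * ||x - t_i||^2

    # Asynchronous pairwise communication: worker i averages with one random
    # peer j, touching no other worker (no global barrier, no all-reduce).
    j = rng.integers(n_workers)
    if j != i:
        avg = 0.5 * (params[i] + params[j])
        params[i] = avg
        params[j] = avg

consensus_error = np.linalg.norm(params - params.mean(axis=0))
print(f"consensus error after {steps} steps: {consensus_error:.4f}")
print(f"distance to optimum: {np.linalg.norm(params.mean(0) - targets.mean(0)):.4f}")
```

Running the sketch shows both quantities shrinking: the pairwise exchanges drive the workers toward consensus while the local steps pull that consensus toward the joint optimum, which is exactly the trade-off asynchronous decentralized methods have to manage.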

Papers citing "$\textbf{A}^2\textbf{CiD}^2$: Accelerating Asynchronous Communication in Decentralized Deep Learning"

5 / 5 papers shown

| Title | Authors | Topics | Date |
|---|---|---|---|
| Accelerating AllReduce with a Persistent Straggler | Arjun Devraj, Eric Ding, Abhishek Vijaya Kumar, Robert Kleinberg, Rachee Singh | | 29 May 2025 |
| Accelerating MoE Model Inference with Expert Sharding | Oana Balmau, Anne-Marie Kermarrec, Rafael Pires, André Loureiro Espírito Santo, M. Vos, Milos Vujasinovic | MoE | 11 Mar 2025 |
| DRACO: Decentralized Asynchronous Federated Learning over Row-Stochastic Wireless Networks | Eunjeong Jeong, Marios Kountouris | | 19 Jun 2024 |
| WASH: Train your Ensemble with Communication-Efficient Weight Shuffling, then Average | Louis Fournier, Adel Nabli, Masih Aminbeidokhti, M. Pedersoli, Eugene Belilovsky, Edouard Oyallon | MoMe, FedML | 27 May 2024 |
| Meta-learning Optimizers for Communication-Efficient Learning | Charles-Étienne Joseph, Benjamin Thérien, A. Moudgil, Boris Knyazev, Eugene Belilovsky | | 02 Dec 2023 |