Exploring the Robustness of Decentralized Training for Large Language Models

1 December 2023
Lin Lu, Chenxi Dai, Wangcheng Tao, Binhang Yuan, Yanan Sun, Pan Zhou

Papers citing "Exploring the Robustness of Decentralized Training for Large Language Models"

4 / 4 papers shown
Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums
Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan
FedML
24 May 2022

Byzantine-Robust Federated Learning with Optimal Statistical Rates and Privacy Guarantees
Banghua Zhu, Lun Wang, Qi Pang, Shuai Wang, Jiantao Jiao, D. Song, Michael I. Jordan
FedML
24 May 2022

Varuna: Scalable, Low-cost Training of Massive Deep Learning Models
Sanjith Athlur, Nitika Saran, Muthian Sivathanu, Ramachandran Ramjee, Nipun Kwatra
GNN
07 Nov 2021

Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines
Shigang Li, Torsten Hoefler
GNN, AI4CE, LRM
14 Jul 2021