

Experiments on Parallel Training of Deep Neural Network using Model Averaging
5 July 2015
Hang Su, Haoyu Chen
Tags: MoMe, FedML
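The cited paper concerns parallel DNN training in which workers run SGD independently and periodically average their model parameters. A minimal sketch of that general scheme on a toy linear-regression problem (the data, model, learning rate, and schedule here are illustrative assumptions, not the paper's setup):

```python
# Toy sketch of periodic model averaging (local SGD): 4 workers train
# on their own data shards, then synchronize by parameter averaging.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])

def make_shard(n=200):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    return X, y

shards = [make_shard() for _ in range(4)]
w = np.zeros(2)  # shared model

for _ in range(20):                # communication rounds
    local_ws = []
    for X, y in shards:            # each worker trains independently
        w_local = w.copy()
        for _ in range(10):        # local SGD steps between syncs
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        local_ws.append(w_local)
    w = np.mean(local_ws, axis=0)  # model averaging step

print(np.round(w, 2))
```

Averaging only every 10 local steps trades a little per-step consistency for a 10x reduction in communication, the trade-off several of the citing papers below analyze.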

Papers citing "Experiments on Parallel Training of Deep Neural Network using Model Averaging"

16 / 16 papers shown
ParamΔ for Direct Weight Mixing: Post-Train Large Language Model at Zero Cost
Sheng Cao, Mingrui Wu, Karthik Prasad, Yuandong Tian, Zechun Liu
MoMe · 23 Apr 2025

1st Place Solution to Odyssey Emotion Recognition Challenge Task1: Tackling Class Imbalance Problem
Mingjie Chen, Hezhao Zhang, Yuanchao Li, Jiachen Luo, Wen Wu, ..., Lin Wang, P. Woodland, Xie Chen, Huy P Phan, Thomas Hain
30 May 2024

Trade-offs of Local SGD at Scale: An Empirical Study
Jose Javier Gonzalez Ortiz, Jonathan Frankle, Michael G. Rabbat, Ari S. Morcos, Nicolas Ballas
FedML · 15 Oct 2021

Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity
Zhuoning Yuan, Zhishuai Guo, Yi Tian Xu, Yiming Ying, Tianbao Yang
FedML · 09 Feb 2021

CLAN: Continuous Learning using Asynchronous Neuroevolution on Commodity Edge Devices
Parth Mannan, A. Samajdar, T. Krishna
27 Aug 2020

DBS: Dynamic Batch Size For Distributed Deep Neural Network Training
Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan Sun, Jiancheng Lv
23 Jul 2020

Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks
Zhishuai Guo, Mingrui Liu, Zhuoning Yuan, Li Shen, Wei Liu, Tianbao Yang
05 May 2020

Communication optimization strategies for distributed deep neural network training: A survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao
06 Mar 2020

Variance Reduced Local SGD with Lower Communication Complexity
Xian-Feng Liang, Shuheng Shen, Jingchang Liu, Zhen Pan, Enhong Chen, Yifei Cheng
FedML · 30 Dec 2019

Split Learning for collaborative deep learning in healthcare
M. Poirot, Praneeth Vepakomma, Ken Chang, Jayashree Kalpathy-Cramer, Rajiv Gupta, Ramesh Raskar
FedML, OOD · 27 Dec 2019

On the Convergence of Local Descent Methods in Federated Learning
Farzin Haddadpour, M. Mahdavi
FedML · 31 Oct 2019

Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD
Jianyu Wang, Gauri Joshi
FedML · 19 Oct 2018

Collaborative Deep Learning Across Multiple Data Centers
Kele Xu, Haibo Mi, Dawei Feng, Huaimin Wang, Chuan Chen, Zibin Zheng, Xu Lan
FedML · 16 Oct 2018

Cooperative SGD: A Unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms
Jianyu Wang, Gauri Joshi
22 Aug 2018

Collaborative Deep Learning in Fixed Topology Networks
Zhanhong Jiang, Aditya Balu, C. Hegde, S. Sarkar
FedML · 23 Jun 2017

Empirical Evaluation of Parallel Training Algorithms on Acoustic Modeling
Wenpeng Li, BinBin Zhang, Lei Xie, Dong Yu
17 Mar 2017