Fair and Efficient Distributed Edge Learning with Hybrid Multipath TCP
Shiva Raj Pokhrel, Jinho Choi, A. Walid
arXiv:2211.09723 · 3 November 2022

Papers citing "Fair and Efficient Distributed Edge Learning with Hybrid Multipath TCP"

11 papers shown
Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications
Jihong Park, S. Samarakoon, Anis Elgabli, Joongheon Kim, M. Bennis, Seong-Lyun Kim, Mérouane Debbah
06 Aug 2020

Joint Parameter-and-Bandwidth Allocation for Improving the Efficiency of Partitioned Edge Learning
Dingzhu Wen, M. Bennis, Kaibin Huang
10 Mar 2020

Communication-Efficient Distributed Deep Learning: A Comprehensive Survey
Zhenheng Tang, Shaoshuai Shi, Wei Wang, Yue Liu, Xiaowen Chu
10 Mar 2020

Communication optimization strategies for distributed deep neural network training: A survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao
06 Mar 2020

A Survey on Distributed Machine Learning
Joost Verbraeken, Matthijs Wolting, Jonathan Katzy, Jeroen Kloppenburg, Tim Verbelen, Jan S. Rellermeyer
20 Dec 2019

Federated Learning over Wireless Networks: Convergence Analysis and Resource Allocation
Canh T. Dinh, N. H. Tran, Minh N. H. Nguyen, Choong Seon Hong, Wei Bao, Albert Y. Zomaya, Vincent Gramoli
29 Oct 2019

Dynamic Stale Synchronous Parallel Distributed Training for Deep Learning
Xing Zhao, Aijun An, Junfeng Liu, Bin Chen
16 Aug 2019

Distilling On-Device Intelligence at the Network Edge
Jihong Park, Shiqiang Wang, Anis Elgabli, Seungeun Oh, Eunjeong Jeong, Han Cha, Hyesung Kim, Seong-Lyun Kim, M. Bennis
16 Aug 2019

Federated Optimization in Heterogeneous Networks
Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith
14 Dec 2018

Parameter Hub: a Rack-Scale Parameter Server for Distributed Deep Neural Network Training
Liang Luo, Jacob Nelson, Luis Ceze, Amar Phanishayee, Arvind Krishnamurthy
21 May 2018

Continuous control with deep reinforcement learning
Timothy Lillicrap, Jonathan J. Hunt, Alexander Pritzel, N. Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
09 Sep 2015