ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Distributed Deep Learning with Event-Triggered Communication
Jemin George, Prudhvi K. Gurram
arXiv:1909.05020, 8 September 2019

Papers citing "Distributed Deep Learning with Event-Triggered Communication"

12 / 12 papers shown
MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, S. Kar
23 May 2019
On Nonconvex Optimization for Machine Learning: Gradients, Stochasticity, and Saddle Points
Chi Jin, Praneeth Netrapalli, Rong Ge, Sham Kakade, Michael I. Jordan
13 Feb 2019
Sharp Analysis for Nonconvex SGD Escaping from Saddle Points
Cong Fang, Zhouchen Lin, Tong Zhang
01 Feb 2019
Stochastic Gradient Push for Distributed Deep Learning
Mahmoud Assran, Nicolas Loizou, Nicolas Ballas, Michael G. Rabbat
27 Nov 2018
Cooperative SGD: A unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms
Jianyu Wang, Gauri Joshi
22 Aug 2018
Distributed Stochastic Gradient Tracking Methods
Shi Pu, A. Nedić
25 May 2018
D$^2$: Decentralized Training over Decentralized Data
Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu
19 Mar 2018
Collaborative Deep Learning in Fixed Topology Networks
Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar
23 Jun 2017
Federated Learning: Strategies for Improving Communication Efficiency
Jakub Konecný, H. B. McMahan, Felix X. Yu, Peter Richtárik, A. Suresh, Dave Bacon
18 Oct 2016
NEXT: In-Network Nonconvex Optimization
P. Lorenzo, G. Scutari
01 Feb 2016
Parallel and Distributed Methods for Nonconvex Optimization--Part II: Applications
G. Scutari, F. Facchinei, Lorenzo Lampariello, Peiran Song, Stefania Sardellitti
15 Jan 2016
HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
Feng Niu, Benjamin Recht, Christopher Ré, Stephen J. Wright
28 Jun 2011