Revisiting EXTRA for Smooth Distributed Optimization
Huan Li, Zhouchen Lin
arXiv:2002.10110, 24 February 2020
Papers citing "Revisiting EXTRA for Smooth Distributed Optimization" (5 papers shown)
A Dual Approach for Optimal Algorithms in Distributed Optimization over Networks
César A. Uribe, Soomin Lee, Alexander Gasnikov, A. Nedić
03 Sep 2018
Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks
Kevin Scaman, Francis R. Bach, Sébastien Bubeck, Y. Lee, Laurent Massoulié
28 Feb 2017
A Fast Distributed Proximal-Gradient Method
Annie I. Chen, Asuman Ozdaglar
08 Oct 2012
Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
Mark Schmidt, Nicolas Le Roux, Francis R. Bach
12 Sep 2011
HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
Feng Niu, Benjamin Recht, Christopher Ré, Stephen J. Wright
28 Jun 2011