ResearchTrend.AI

A Hybrid Variance-Reduced Method for Decentralized Stochastic Non-Convex Optimization
arXiv:2102.06752 · 12 February 2021
Ran Xin, U. Khan, S. Kar

Papers citing "A Hybrid Variance-Reduced Method for Decentralized Stochastic Non-Convex Optimization"

6 / 6 papers shown
1. Decentralized Gradient-Free Methods for Stochastic Non-Smooth Non-Convex Optimization
   Zhenwei Lin, Jingfan Xia, Qi Deng, Luo Luo (18 Oct 2023)
2. Variance-reduced accelerated methods for decentralized stochastic double-regularized nonconvex strongly-concave minimax problems
   Gabriel Mancino-Ball, Yangyang Xu (14 Jul 2023)
3. Distributed Random Reshuffling Methods with Improved Convergence
   Kun-Yen Huang, Linli Zhou, Shi Pu (21 Jun 2023)
4. Variance-Reduced Stochastic Quasi-Newton Methods for Decentralized Learning: Part I
   Jiaojiao Zhang, Huikang Liu, Anthony Man-Cho So, Qing Ling (19 Jan 2022)
5. A Unified and Refined Convergence Analysis for Non-Convex Decentralized Learning
   Sulaiman A. Alghunaim, Kun Yuan (19 Oct 2021)
6. Optimal Complexity in Decentralized Training
   Yucheng Lu, Christopher De Sa (15 Jun 2020)