Decentralized Local Updates with Dual-Slow Estimation and Momentum-based Variance-Reduction for Non-Convex Optimization
arXiv:2307.08299 · 17 July 2023
Kangyang Luo, Kunkun Zhang, Sheng Zhang, Xiang Li, Ming Gao

Papers citing "Decentralized Local Updates with Dual-Slow Estimation and Momentum-based Variance-Reduction for Non-Convex Optimization"

4 papers

DFDG: Data-Free Dual-Generator Adversarial Distillation for One-Shot Federated Learning
Kangyang Luo, Shuai Wang, Y. Fu, Renrong Shao, Xiang Li, Yunshi Lan, Ming Gao, Jinlong Shu
FedML · 12 Sep 2024

Privacy-Preserving Federated Learning with Consistency via Knowledge Distillation Using Conditional Generator
Kangyang Luo, Shuai Wang, Xiang Li, Yunshi Lan, Ming Gao, Jinlong Shu
FedML · 11 Sep 2024

DecentLaM: Decentralized Momentum SGD for Large-batch Deep Training
Kun Yuan, Yiming Chen, Xinmeng Huang, Yingya Zhang, Pan Pan, Yinghui Xu, W. Yin
MoE · 24 Apr 2021

Optimal Distributed Online Prediction using Mini-Batches
O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
07 Dec 2010