On Convergence of Distributed Approximate Newton Methods: Globalization, Sharper Bounds and Beyond

6 August 2019
Xiao-Tong Yuan, Ping Li

Papers citing "On Convergence of Distributed Approximate Newton Methods: Globalization, Sharper Bounds and Beyond"

9 / 9 papers shown

  1. Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
     Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang (15 Apr 2023)
  2. Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities
     Aleksandr Beznosikov, Martin Takáč, Alexander Gasnikov (15 Feb 2023)
  3. Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation
     Xiao-Tong Yuan, P. Li (09 Jan 2023)
  4. Scalable K-FAC Training for Deep Neural Networks with Distributed Preconditioning
     Lin Zhang, S. Shi, Wei Wang, Bo-wen Li (30 Jun 2022)
  5. Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity
     D. Kovalev, Aleksandr Beznosikov, Ekaterina Borodich, Alexander Gasnikov, G. Scutari (30 May 2022)
  6. Acceleration in Distributed Optimization under Similarity
     Helena Lofstrom, G. Scutari, Tianyue Cao, Alexander Gasnikov (24 Oct 2021)
  7. Data-Free Knowledge Distillation for Heterogeneous Federated Learning
     Zhuangdi Zhu, Junyuan Hong, Jiayu Zhou (20 May 2021) [FedML]
  8. Newton Method over Networks is Fast up to the Statistical Precision
     Amir Daneshmand, G. Scutari, Pavel Dvurechensky, Alexander Gasnikov (12 Feb 2021)
  9. Distributed Hierarchical GPU Parameter Server for Massive Scale Deep Learning Ads Systems
     Weijie Zhao, Deping Xie, Ronglai Jia, Yulei Qian, Rui Ding, Mingming Sun, P. Li (12 Mar 2020) [MoE]