ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization

Rixon Crane, Fred Roosta
arXiv:1901.05134, 16 January 2019
Links: arXiv, PDF, HTML

Papers citing "DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization" (12 papers):
  • Federated Adapter on Foundation Models: An Out-Of-Distribution Approach [OODD]
    Yiyuan Yang, Guodong Long, Dinesh Manocha, Qinghua Lu, Shanshan Ye, Jing Jiang (02 May 2025)
  • Matching Pursuit Based Scheduling for Over-the-Air Federated Learning
    Ali Bereyhi, Adela Vagollari, S. Asaad, R. Muller, W. Gerstacker, H. Vincent Poor (14 Jun 2022)
  • Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation
    Rustem Islamov, Xun Qian, Slavomír Hanzely, M. Safaryan, Peter Richtárik (07 Jun 2022)
  • Over-the-Air Federated Learning via Second-Order Optimization
    Peng Yang, Yuning Jiang, Ting Wang, Yong Zhou, Yuanming Shi, Colin N. Jones (29 Mar 2022)
  • SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing [FedML]
    Nicolò Dal Fabbro, S. Dey, M. Rossi, Luca Schenato (11 Feb 2022)
  • Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning [FedML]
    Xun Qian, Rustem Islamov, M. Safaryan, Peter Richtárik (02 Nov 2021)
  • A Stochastic Newton Algorithm for Distributed Convex Optimization
    Brian Bullins, Kumar Kshitij Patel, Ohad Shamir, Nathan Srebro, Blake E. Woodworth (07 Oct 2021)
  • L-DQN: An Asynchronous Limited-Memory Distributed Quasi-Newton Method
    Bugra Can, Saeed Soori, M. Dehnavi, Mert Gurbuzbalaban (20 Aug 2021)
  • FedNL: Making Newton-Type Methods Applicable to Federated Learning [FedML]
    M. Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik (05 Jun 2021)
  • Distributed Second Order Methods with Fast Rates and Compressed Communication
    Rustem Islamov, Xun Qian, Peter Richtárik (14 Feb 2021)
  • rTop-k: A Statistical Estimation Approach to Distributed SGD
    L. P. Barnes, Huseyin A. Inan, Berivan Isik, Ayfer Özgür (21 May 2020)
  • Communication-Efficient Accurate Statistical Estimation
    Jianqing Fan, Yongyi Guo, Kaizheng Wang (12 Jun 2019)