L-DQN: An Asynchronous Limited-Memory Distributed Quasi-Newton Method

20 August 2021
Bugra Can
Saeed Soori
M. Dehnavi
Mert Gurbuzbalaban
Abstract

This work proposes a distributed algorithm for solving empirical risk minimization problems, called L-DQN, under the master/worker communication model. L-DQN is a distributed limited-memory quasi-Newton method that supports asynchronous computations among the worker nodes. Our method is efficient both in terms of storage and communication costs, i.e., in every iteration the master node and workers communicate vectors of size O(d), where d is the dimension of the decision variable, and the amount of memory required on each node is O(md), where m is an adjustable parameter. To our knowledge, this is the first distributed quasi-Newton method with provable global linear convergence guarantees in the asynchronous setting where delays between nodes are present. Numerical experiments are provided to illustrate the theory and the practical performance of our method.
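
The storage and communication figures quoted above follow from standard limited-memory quasi-Newton mechanics. The sketch below is a plain L-BFGS-style two-loop recursion in Python, not the paper's exact L-DQN update and without its asynchronous master/worker protocol; it only illustrates, under those assumptions, where the O(md) per-node memory (at most m curvature pairs of length d) and the O(d) communicated vectors (gradients and search directions) come from.

```python
# Minimal sketch of the limited-memory quasi-Newton idea behind methods like L-DQN.
# NOT the paper's L-DQN update: the distributed, asynchronous master/worker logic
# is omitted, and the standard L-BFGS two-loop recursion is used as a stand-in.
import numpy as np
from collections import deque

class LimitedMemoryQN:
    def __init__(self, memory_size: int):
        self.m = memory_size                # the adjustable memory parameter m
        self.pairs = deque(maxlen=self.m)   # keeps at most m (s, y) pairs -> O(m*d) storage

    def update(self, s: np.ndarray, y: np.ndarray):
        """Record a curvature pair s = x_{k+1} - x_k, y = grad_{k+1} - grad_k."""
        if s @ y > 1e-10:                   # skip pairs that would break positive definiteness
            self.pairs.append((s, y))

    def direction(self, grad: np.ndarray) -> np.ndarray:
        """Two-loop recursion: returns -H_k @ grad using O(m*d) work and storage."""
        q = grad.copy()
        alphas = []
        for s, y in reversed(self.pairs):   # newest pair first
            rho = 1.0 / (y @ s)
            a = rho * (s @ q)
            q -= a * y
            alphas.append((a, rho, s, y))
        if self.pairs:                      # initial scaling H_0 = (s'y / y'y) * I
            s, y = self.pairs[-1]
            q *= (s @ y) / (y @ y)
        for a, rho, s, y in reversed(alphas):  # oldest pair first
            b = rho * (y @ q)
            q += (a - b) * s
        return -q                           # a single length-d vector, i.e. O(d) to communicate
```

Because only the deque of m length-d pairs is ever stored, no node materializes a d x d Hessian approximation, and the only objects exchanged per iteration are length-d vectors, matching the costs stated in the abstract.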

View on arXiv: 2108.09365