
APMSqueeze: A Communication Efficient Adam-Preconditioned Momentum SGD Algorithm
arXiv:2008.11343 (26 August 2020)
Hanlin Tang, Shaoduo Gan, Samyam Rajbhandari, Xiangru Lian, Ji Liu, Yuxiong He, Ce Zhang

Papers citing "APMSqueeze: A Communication Efficient Adam-Preconditioned Momentum SGD Algorithm"

1. PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization
   Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi (31 May 2019)

2. DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression
   Hanlin Tang, Xiangru Lian, Chen Yu, Tong Zhang, Ji Liu (15 May 2019)

3. Adaptive Gradient Methods with Dynamic Bound of Learning Rate
   Liangchen Luo, Yuanhao Xiong, Yan Liu, Xu Sun (26 Feb 2019)

4. Asynchronous Stochastic Gradient Descent with Delay Compensation
   Shuxin Zheng, Qi Meng, Taifeng Wang, Wei Chen, Nenghai Yu, Zhiming Ma, Tie-Yan Liu (27 Sep 2016)

5. ADADELTA: An Adaptive Learning Rate Method
   Matthew D. Zeiler (22 Dec 2012)