
Alada: Alternating Adaptation of Momentum Method for Memory-Efficient Matrix Optimization

Xiaoyu He
Yu Cai
Jin Jia
Canxi Huang
Wenqing Chen
Zibin Zheng
Main: 8 Pages
6 Figures
Bibliography: 2 Pages
Abstract

This work proposes Alada, an adaptive momentum method for stochastic optimization over large-scale matrices. Alada employs a rank-one factorization approach to estimate the second moment of the gradients, where the factors are updated alternately to minimize the estimation error. Alada achieves sublinear memory overhead and can be readily extended to optimizing tensor-shaped variables. We also equip Alada with a first-moment estimation rule, which enhances the algorithm's robustness without incurring additional memory overhead. The theoretical performance of Alada aligns with that of traditional methods such as Adam. Numerical studies on several natural language processing tasks demonstrate Alada's reduced memory overhead and its robustness in training large models relative to Adam and its variants.
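The abstract only sketches the idea; the exact update rules are given in the paper. The sketch below is a minimal NumPy illustration of the general technique the abstract describes: a rank-one factorization V ≈ u vᵀ of the second-moment matrix, refit once per step by alternating least squares against an exponential moving average of the squared gradient, so that only the two vectors u and v are stored. The names u, v, beta2, the initialization, and the preconditioned update at the end are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np


def alternating_rank_one_second_moment(u, v, grad, beta2=0.999, eps=1e-30):
    """One alternating least-squares pass fitting u v^T to the EMA of the
    squared gradient. Only O(m + n) state (u and v) is kept between steps;
    the implicit target S = beta2 * u v^T + (1 - beta2) * grad**2 is never
    materialized as a separate matrix."""
    g2 = grad * grad  # elementwise squared gradient, shape (m, n)

    # Fix v, solve min_u ||S - u v^T||_F:  u_new = S v / (v^T v)
    Sv = beta2 * u * (v @ v) + (1.0 - beta2) * (g2 @ v)
    u_new = Sv / max(v @ v, eps)

    # Fix u_new, solve min_v ||S - u_new v^T||_F:  v_new = S^T u_new / (u_new^T u_new)
    Stu = beta2 * v * (u @ u_new) + (1.0 - beta2) * (g2.T @ u_new)
    v_new = Stu / max(u_new @ u_new, eps)

    # With nonnegative initialization, u and v stay nonnegative, so u v^T is a
    # valid (elementwise nonnegative) second-moment estimate.
    return u_new, v_new


if __name__ == "__main__":
    # Toy usage on a random quadratic-like problem (hypothetical setup).
    rng = np.random.default_rng(0)
    m, n = 64, 32
    W = rng.normal(size=(m, n))
    u = np.full(m, 1e-8)
    v = np.full(n, 1e-8)
    lr, delta = 1e-2, 1e-8

    for step in range(100):
        grad = 2.0 * W  # gradient of ||W||_F^2, just to exercise the update
        u, v = alternating_rank_one_second_moment(u, v, grad)
        # Precondition with the factored second moment, Adam-style.
        W -= lr * grad / (np.sqrt(np.outer(u, v)) + delta)

    print("final norm:", np.linalg.norm(W))
```

As a design note, the factored estimate is in the same spirit as row/column-factored methods such as Adafactor, but here the two factors are refit jointly by an alternating step rather than tracked as separate row and column statistics; the paper's actual update rule and its first-moment rule should be taken from the paper itself.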
