
ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method

Abstract

In this paper, we propose a novel accelerated gradient method called ANITA for solving the fundamental finite-sum optimization problem. Concretely, we consider both the general convex and the strongly convex settings: i) For general convex finite-sum problems, ANITA improves the previous state-of-the-art result given by Varag (Lan et al., 2019). In particular, for large-scale problems or when the convergence error is not very small, i.e., $n \geq \frac{1}{\epsilon^2}$, ANITA obtains the \emph{first} optimal result $O(n)$, matching the lower bound $\Omega(n)$ provided by Woodworth and Srebro (2016), while previous results are the $O(n \log \frac{1}{\epsilon})$ of Varag (Lan et al., 2019) and the $O(\frac{n}{\sqrt{\epsilon}})$ of Katyusha (Allen-Zhu, 2017). ii) For strongly convex finite-sum problems, we also show that ANITA achieves the optimal convergence rate $O\big((n+\sqrt{\frac{nL}{\mu}})\log\frac{1}{\epsilon}\big)$, matching the lower bound $\Omega\big((n+\sqrt{\frac{nL}{\mu}})\log\frac{1}{\epsilon}\big)$ provided by Lan and Zhou (2015). Moreover, ANITA enjoys a simpler loopless algorithmic structure, unlike previous accelerated algorithms such as Varag (Lan et al., 2019) and Katyusha (Allen-Zhu, 2017), which use double-loop structures. We also provide a novel \emph{dynamic multi-stage convergence analysis}, which is the key technical ingredient for improving previous results to the optimal rates. We believe that our new theoretical rates and novel convergence analysis for the fundamental finite-sum problem will directly lead to key improvements for many related problems, such as distributed/federated/decentralized optimization (e.g., Li and Richtárik, 2021). Finally, numerical experiments show that ANITA converges faster than the previous state-of-the-art Varag (Lan et al., 2019), validating our theoretical results and confirming the practical superiority of ANITA.
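
To give a concrete sense of the "loopless" structure the abstract refers to, below is a minimal sketch of a loopless variance-reduced gradient update in the style of loopless SVRG (L-SVRG): instead of an explicit outer/inner (double) loop, the full-gradient snapshot is refreshed with a small probability at every iteration. This is only an illustration under stated assumptions, not the ANITA update itself (which additionally incorporates acceleration); the helper grad_i, the step size 1/(6L), and the snapshot probability p = 1/n are illustrative choices.

    import numpy as np

    def loopless_vr_gd(grad_i, x0, n, L, p=None, n_iters=1000, rng=None):
        """Sketch of a loopless variance-reduced gradient method (L-SVRG style).

        grad_i(x, i): gradient of the i-th component f_i at the point x.
        The snapshot point w is refreshed with probability p each iteration,
        replacing the explicit inner loop of double-loop methods.
        """
        rng = np.random.default_rng() if rng is None else rng
        p = 1.0 / n if p is None else p        # expected snapshot interval ~ n iterations
        eta = 1.0 / (6.0 * L)                  # illustrative step size (assumption)
        x, w = x0.copy(), x0.copy()
        full_grad = np.mean([grad_i(w, j) for j in range(n)], axis=0)
        for _ in range(n_iters):
            i = rng.integers(n)
            # unbiased variance-reduced gradient estimator
            g = grad_i(x, i) - grad_i(w, i) + full_grad
            x = x - eta * g
            if rng.random() < p:               # loopless snapshot update (no outer loop)
                w = x.copy()
                full_grad = np.mean([grad_i(w, j) for j in range(n)], axis=0)
        return x

The single while/for loop with a coin flip is what makes the method "loopless": in expectation the snapshot is recomputed about once every n iterations, mimicking the epochs of double-loop methods such as Varag and Katyusha without maintaining two nested loops.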
