Dynamic Online Gradient Descent with Improved Query Complexity: A Theoretical Revisit

Abstract

We provide a new theoretical framework for analyzing online gradient descent in dynamic environments. Compared with previous work, the new framework recovers the state-of-the-art dynamic regret but does not require extra gradient queries at every iteration. Specifically, when the functions are $\alpha$-strongly convex and $\beta$-smooth, previous work requires $O(\kappa)$ gradient queries per iteration, where $\kappa = \frac{\beta}{\alpha}$, to achieve the state-of-the-art dynamic regret. Our framework shows that the query complexity can be improved to $O(1)$, independent of $\kappa$. The improvement is significant for ill-conditioned problems, whose objective functions usually have a large $\kappa$.
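To make the query-complexity contrast concrete, below is a minimal sketch of dynamic online gradient descent that queries exactly one gradient per round, in the spirit of the $O(1)$ regime the abstract describes; it is not the paper's actual algorithm or analysis. The function names (`dynamic_ogd`, `project`), the step size, the feasible set, and the drifting quadratic losses are all illustrative assumptions.

```python
import numpy as np

def project(x, radius=10.0):
    """Project onto an l2 ball of the given radius (assumed feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def dynamic_ogd(grad_fns, x0, eta):
    """Run online gradient descent with one gradient query per round.

    grad_fns : list of callables; grad_fns[t](x) returns the gradient of
               the round-t loss at x (queried exactly once per round).
    x0       : initial iterate.
    eta      : constant step size (e.g., on the order of 1/alpha for
               alpha-strongly-convex losses).
    """
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for grad in grad_fns:
        g = grad(x)                  # the only gradient query this round
        x = project(x - eta * g)     # single descent step, then project
        iterates.append(x.copy())
    return iterates

# Example: time-varying quadratics f_t(x) = 0.5 * ||x - c_t||^2 with a
# drifting minimizer c_t, so the comparator sequence moves over time.
rng = np.random.default_rng(0)
centers = np.cumsum(rng.normal(scale=0.1, size=(100, 2)), axis=0)
grads = [lambda x, c=c: x - c for c in centers]  # grad of f_t at x
path = dynamic_ogd(grads, x0=np.zeros(2), eta=0.5)
print("final iterate:", path[-1], "final minimizer:", centers[-1])
```

By contrast, the $O(\kappa)$-query baselines referenced in the abstract would run on the order of $\kappa$ such gradient steps within each round before moving to the next loss; the loop above takes only one.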
