
Small Gradient Norm Regret for Online Convex Optimization

Wenzhi Gao
Chang He
Madeleine Udell
Main: 9 pages · Appendix: 13 pages · Bibliography: 3 pages · 1 figure · 1 table
Abstract

This paper introduces a new problem-dependent regret measure for online convex optimization with smooth losses. The notion, which we call the $G^\star$ regret, depends on the cumulative squared gradient norm evaluated at the decision in hindsight, $\sum_{t=1}^T \|\nabla \ell_t(x^\star)\|^2$. We show that the $G^\star$ regret strictly refines the existing $L^\star$ (small-loss) regret, and that it can be arbitrarily sharper when the losses have vanishing curvature around the hindsight decision. We establish upper and lower bounds on the $G^\star$ regret and extend our results to dynamic regret and bandit settings. As a byproduct, we refine the existing convergence analysis of stochastic optimization algorithms in the interpolation regime. Experiments validate our theoretical findings.
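
To make the quantities in the abstract concrete, below is a minimal numerical sketch (not code from the paper): it evaluates the small-loss quantity $\sum_{t=1}^T \ell_t(x^\star)$ and the squared-gradient quantity $\sum_{t=1}^T \|\nabla \ell_t(x^\star)\|^2$ at a fixed comparator $x^\star$, for a toy family of losses whose curvature vanishes where the residual is zero. The loss family, the data $a_t, b_t$, and the noise level are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

# Toy sketch (not from the paper): compare the two problem-dependent
# quantities at a fixed comparator x_star for losses
#   ell_t(x) = (a_t . x - b_t)^4 / 4,
# which are smooth on bounded domains and whose curvature vanishes
# at points where a_t . x = b_t.
#   L_star_T = sum_t ell_t(x_star)                (small-loss quantity)
#   G_star_T = sum_t ||grad ell_t(x_star)||^2     (squared-gradient quantity)
# a_t, b_t, and noise_level are hypothetical choices for illustration.

rng = np.random.default_rng(0)
T, d = 1_000, 5
x_star = rng.standard_normal(d)
a = rng.standard_normal((T, d))

# Near-interpolation: targets are consistent with x_star up to small noise,
# so the residuals r_t = a_t . x_star - b_t are tiny at the comparator.
noise_level = 1e-2
b = a @ x_star + noise_level * rng.standard_normal(T)

r = a @ x_star - b                   # residuals at x_star, shape (T,)
L_star = np.sum(r**4) / 4            # sum_t ell_t(x_star)
grads = (r**3)[:, None] * a          # grad ell_t(x_star) = r_t^3 * a_t
G_star = np.sum(grads**2)            # sum_t ||grad ell_t(x_star)||^2

print(f"L_star_T = {L_star:.3e}")
print(f"G_star_T = {G_star:.3e}")
```

In this toy setting the residuals at $x^\star$ are small and the curvature of each loss vanishes with them, so the squared-gradient quantity comes out far smaller than the small-loss quantity, which is consistent with the abstract's claim that the $G^\star$ notion can be sharper under vanishing curvature around the hindsight decision.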
