
Adaptivity and Universality: Problem-dependent Universal Regret for Online Convex Optimization

Main: 49 Pages
Appendix: 64 Pages
Bibliography: 5 Pages
12 Figures
7 Tables
Abstract

Universal online learning aims to achieve optimal regret guarantees without requiring prior knowledge of the curvature of the online functions. Existing methods have established minimax-optimal regret bounds for universal online learning, where a single algorithm can simultaneously attain $\mathcal{O}(\sqrt{T})$ regret for convex functions, $\mathcal{O}(d \log T)$ regret for exp-concave functions, and $\mathcal{O}(\log T)$ regret for strongly convex functions, where $T$ is the number of rounds and $d$ is the dimension of the feasible domain. However, these methods still lack problem-dependent adaptivity. In particular, no universal method provides regret bounds that scale with the gradient variation $V_T$, a key quantity that plays a crucial role in applications such as stochastic optimization and fast-rate convergence in games. In this work, we introduce UniGrad, a novel approach that achieves both universality and adaptivity, with two distinct realizations: this http URL and this http URL. Both methods achieve universal regret guarantees that adapt to the gradient variation, simultaneously attaining $\mathcal{O}(\log V_T)$ regret for strongly convex functions and $\mathcal{O}(d \log V_T)$ regret for exp-concave functions. For convex functions, the regret bounds differ: this http URL achieves an $\mathcal{O}(\sqrt{V_T \log V_T})$ bound while preserving the RVU property that is crucial for fast convergence in online games, whereas this http URL achieves the optimal $\mathcal{O}(\sqrt{V_T})$ regret bound through a novel design. Both methods employ a meta algorithm with $\mathcal{O}(\log T)$ base learners, which naturally requires $\mathcal{O}(\log T)$ gradient queries per round. To enhance computational efficiency, we introduce UniGrad++, which retains the same regret guarantees while reducing the number of gradient queries to just 1 per round via surrogate optimization. We further discuss several implications of our results.
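
For concreteness, the following is a minimal sketch of the generic two-layer online ensemble alluded to above: a Hedge-style meta algorithm aggregating $\mathcal{O}(\log T)$ online-gradient-descent base learners, where all learners are driven by a single gradient query per round through a linearized surrogate loss. The step-size grid, the meta learning rate, and the helper names (`universal_ensemble`, `project_l2_ball`, `grad_oracle`) are illustrative assumptions and do not reproduce the exact UniGrad or UniGrad++ updates.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto the l2 ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def universal_ensemble(grad_oracle, T, d, radius=1.0, G=1.0):
    """Sketch of a two-layer ensemble: Hedge meta-learner over OGD base
    learners with geometrically spaced step sizes.  Only one gradient is
    queried per round (at the combined decision); both layers are updated
    with the linearized surrogate <g_t, x>.  Illustrative, not UniGrad."""
    num_base = max(1, int(np.ceil(np.log2(T))))          # O(log T) base learners
    etas = [radius / (G * np.sqrt(T)) * (2.0 ** i) for i in range(num_base)]
    xs = [np.zeros(d) for _ in range(num_base)]          # base decisions
    weights = np.ones(num_base) / num_base               # meta (Hedge) weights
    meta_lr = np.sqrt(np.log(num_base + 1) / T)
    decisions = []

    for t in range(T):
        # Combined decision: weighted average of the base decisions.
        x_t = sum(w * x for w, x in zip(weights, xs))
        decisions.append(x_t)
        g_t = grad_oracle(t, x_t)                        # single gradient query

        # Meta update: Hedge on the linearized surrogate losses <g_t, x_i - x_t>.
        surrogate = np.array([g_t @ (x_i - x_t) for x_i in xs])
        weights = weights * np.exp(-meta_lr * surrogate)
        weights /= weights.sum()

        # Base updates: projected OGD on the same linearized surrogate.
        xs = [project_l2_ball(x_i - eta * g_t, radius)
              for x_i, eta in zip(xs, etas)]
    return decisions

# Example usage (hypothetical losses): f_t(x) = ||x - e_1||^2 on the unit ball.
e1 = np.zeros(5); e1[0] = 1.0
decisions = universal_ensemble(lambda t, x: 2.0 * (x - e1), T=1000, d=5)
```

The point the sketch illustrates is that the meta learner and all base learners can be updated from the same gradient $\nabla f_t(x_t)$ evaluated at the combined decision, which is what keeps the per-round gradient cost at one query.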
