Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach

17 July 2023
Yu-Hu Yan
Peng Zhao
Zhi-Hua Zhou
Abstract

In this paper, we propose an online convex optimization approach with two different levels of adaptivity. On a higher level, our approach is agnostic to the unknown types and curvatures of the online functions, while at a lower level, it can exploit the unknown niceness of the environments and attain problem-dependent guarantees. Specifically, we obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions, respectively, where $d$ is the dimension, $V_T$ denotes the problem-dependent gradient variation, and the $\hat{\mathcal{O}}(\cdot)$-notation omits $\log V_T$ factors. Our result not only safeguards the worst-case guarantees but also directly implies the small-loss bounds in analysis. Moreover, when applied to adversarial/stochastic convex optimization and game theory problems, our result enhances the existing universal guarantees. Our approach is based on a multi-layer online ensemble framework incorporating novel ingredients, including a carefully designed optimism for unifying diverse function types and cascaded corrections for algorithmic stability. Notably, despite its multi-layer structure, our algorithm requires only one gradient query per round, making it favorable when gradient evaluation is time-consuming. This is facilitated by a novel regret decomposition equipped with carefully designed surrogate losses.
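The abstract does not spell out $V_T$; in the gradient-variation literature this work builds on, it is typically the cumulative variation of consecutive gradients over the domain $\mathcal{X}$:

\[
V_T \;=\; \sum_{t=2}^{T} \sup_{x \in \mathcal{X}} \big\| \nabla f_t(x) - \nabla f_{t-1}(x) \big\|_2^2 .
\]

This quantity is small in slowly changing environments and at most $\mathcal{O}(T)$ under bounded gradients, which is why the bounds above still recover the standard worst-case rates.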
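To make the ensemble idea concrete, here is a minimal two-layer sketch in Python under assumptions of my own: a Hedge-style meta-learner over online-gradient-descent base learners with a grid of step sizes, all updated from linearized surrogate losses built from a single gradient query at the played decision. It omits the paper's optimism and cascaded correction terms, so it is an illustration of the single-query ensemble structure, not the authors' algorithm; all names and constants are hypothetical.

import numpy as np

class BaseOGD:
    """Online gradient descent on linearized surrogate losses."""
    def __init__(self, dim, step_size, radius=1.0):
        self.x = np.zeros(dim)
        self.eta = step_size
        self.radius = radius  # domain: Euclidean ball of this radius

    def update(self, grad):
        self.x -= self.eta * grad
        norm = np.linalg.norm(self.x)
        if norm > self.radius:          # project back onto the ball
            self.x *= self.radius / norm

class HedgeMeta:
    """Exponential-weights meta-learner over the base learners."""
    def __init__(self, n_experts, lr):
        self.w = np.full(n_experts, 1.0 / n_experts)
        self.lr = lr

    def combine(self, xs):
        return self.w @ xs              # weighted ensemble decision

    def update(self, losses):
        # shift by the minimum for numerical stability
        self.w *= np.exp(-self.lr * (losses - losses.min()))
        self.w /= self.w.sum()

def run(grad_oracle, dim, T, step_sizes):
    bases = [BaseOGD(dim, eta) for eta in step_sizes]
    meta = HedgeMeta(len(bases), lr=np.sqrt(np.log(len(bases)) / T))
    for _ in range(T):
        xs = np.stack([b.x for b in bases])
        x_t = meta.combine(xs)          # play the ensemble decision
        g_t = grad_oracle(x_t)          # the ONLY gradient query this round
        # Linearized surrogate <g_t, x>: every learner is scored and
        # updated with this same gradient, so no extra queries are made
        # at the base learners' own points.
        meta.update(xs @ g_t)
        for b in bases:
            b.update(g_t)
    return x_t

if __name__ == "__main__":
    # Toy check: a fixed quadratic f(x) = ||x - target||^2 played online.
    target = np.full(5, 0.3)
    x = run(lambda x: 2.0 * (x - target), dim=5, T=2000,
            step_sizes=[0.5 ** k for k in range(8)])
    print("final decision:", x)         # should approach `target`

The step-size grid is what buys universality in this sketch: whichever step size suits the unknown curvature, some base learner already runs with it, and the meta-learner only pays a small regret against that learner.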

View on arXiv: https://arxiv.org/abs/2307.08360