arXiv:2201.08905

Optimal Dynamic Regret in Proper Online Learning with Strongly Convex Losses and Beyond

21 January 2022
Dheeraj Baby
Yu-Xiang Wang
Abstract

We study the framework of universal dynamic regret minimization with strongly convex losses. We answer an open problem in Baby and Wang (2021) by showing that, in a proper learning setup, Strongly Adaptive algorithms can achieve the near-optimal dynamic regret of $\tilde O\big(d^{1/3} n^{1/3}\,\text{TV}[u_{1:n}]^{2/3} \vee d\big)$ against any comparator sequence $u_1, \ldots, u_n$ simultaneously, where $n$ is the time horizon and $\text{TV}[u_{1:n}]$ is the Total Variation of the comparator sequence. These results are facilitated by exploiting a number of new structures imposed by the KKT conditions that were not considered in Baby and Wang (2021), which also lead to further improvements over their results, such as (a) handling non-smooth losses and (b) improving the dimension dependence of the regret. Further, we derive near-optimal dynamic regret rates for the special case of proper online learning with exp-concave losses and an $L_\infty$-constrained decision set.
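For reference, the bound above is stated in terms of the universal dynamic regret and the total variation of the comparator sequence. A minimal sketch of these standard quantities follows, using generic notation not taken from the abstract ($x_t$ for the learner's prediction and $f_t$ for the loss at round $t$); the specific norm in the TV term follows the paper's convention and is left generic here:

$$
% Dynamic regret of the learner's plays x_1,...,x_n against a comparator sequence u_1,...,u_n
R_n(u_{1:n}) = \sum_{t=1}^{n} f_t(x_t) - \sum_{t=1}^{n} f_t(u_t),
\qquad
% Total Variation of the comparator sequence (norm choice per the paper)
\text{TV}[u_{1:n}] = \sum_{t=2}^{n} \lVert u_t - u_{t-1} \rVert .
$$

Universal dynamic regret means the guarantee holds simultaneously for every comparator sequence $u_{1:n}$ in the decision set, with the rate adapting to its total variation.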
