Second Order Path Variationals in Non-Stationary Online Learning

4 May 2022
Dheeraj Baby
Yu-Xiang Wang
arXiv:2205.01921
Abstract

We consider the problem of universal dynamic regret minimization under exp-concave and smooth losses. We show that appropriately designed Strongly Adaptive algorithms achieve a dynamic regret of $\tilde O(d^2 n^{1/5} C_n^{2/5} \vee d^2)$, where $n$ is the time horizon and $C_n$ a path variational based on second order differences of the comparator sequence. Such a path variational naturally encodes comparator sequences that are piecewise linear -- a powerful family that tracks a variety of non-stationarity patterns in practice (Kim et al., 2009). The aforementioned dynamic regret rate is shown to be optimal modulo dimension dependencies and poly-logarithmic factors of $n$. Our proof techniques rely on analysing the KKT conditions of the offline oracle and require several non-trivial generalizations of the ideas in Baby and Wang (2021); the latter work only leads to a slower dynamic regret rate of $\tilde O(d^{2.5} n^{1/3} C_n^{2/3} \vee d^{2.5})$ for the current problem.
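For context, dynamic regret measures the learner's cumulative loss against an arbitrary comparator sequence, and a second order path variational charges the comparators for changes in their increments. The display below is a minimal sketch of these quantities in standard notation; the specific norm and indexing used for $C_n$ are assumptions, since the abstract only states that $C_n$ is based on second order differences of the comparator sequence.

$$R_n(u_1,\dots,u_n) \;=\; \sum_{t=1}^{n} f_t(x_t) \;-\; \sum_{t=1}^{n} f_t(u_t),$$

where $x_t$ is the learner's play and $f_t$ the loss revealed at round $t$, and (assumed form)

$$C_n \;=\; \sum_{t=3}^{n} \bigl\| (u_t - u_{t-1}) - (u_{t-1} - u_{t-2}) \bigr\|_1 .$$

Under such a definition, $C_n = 0$ whenever the comparators lie on a single line, so a piecewise linear comparator sequence pays only at its breakpoints, which is what makes this variational a natural fit for tracking piecewise linear non-stationarity.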
