Dynamic Regret for Strongly Adaptive Methods and Optimality of Online KRR

22 November 2021 · arXiv:2111.11550
Dheeraj Baby
Hilaf Hasson
Yuyang Wang
Abstract

We consider the framework of non-stationary Online Convex Optimization, where a learner seeks to control its dynamic regret against an arbitrary sequence of comparators. When the loss functions are strongly convex or exp-concave, we demonstrate that Strongly Adaptive (SA) algorithms can be viewed as a principled way of controlling dynamic regret in terms of the path variation $V_T$ of the comparator sequence. Specifically, we show that SA algorithms enjoy $\tilde{O}(\sqrt{T V_T} \vee \log T)$ and $\tilde{O}(\sqrt{d T V_T} \vee d \log T)$ dynamic regret for strongly convex and exp-concave losses respectively, without a priori knowledge of $V_T$. The versatility of this principled approach is further demonstrated by novel results in the settings of learning against bounded linear predictors and online regression with Gaussian kernels. In a related setting, the second component of the paper addresses an open question posed by Zhdanov and Kalnishkan (2010) concerning online kernel regression with squared-error losses. We derive a new lower bound on a certain penalized regret, which establishes the near minimax optimality of online Kernel Ridge Regression (KRR). Our lower bound can be viewed as an RKHS extension of the lower bound derived in Vovk (2001) for online linear regression in finite dimensions.
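
For orientation, the quantities referenced above admit the following standard definitions; they are stated here as the textbook forms of dynamic regret and path variation rather than quoted from the paper, and the Euclidean norm in $V_T$ is an illustrative choice (the paper may measure variation in a different norm).

% Standard definitions, stated for orientation (assumed, not quoted from the paper);
% the norm in V_T is illustrative.
\[
  \mathrm{Reg}_T(u_1,\dots,u_T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t),
  \qquad
  V_T \;=\; \sum_{t=2}^{T} \lVert u_t - u_{t-1} \rVert_2,
\]

where $x_1,\dots,x_T$ are the learner's predictions, $f_1,\dots,f_T$ are the loss functions revealed by the environment, and $u_1,\dots,u_T$ is the arbitrary comparator sequence.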

View on arXiv