  3. 1909.03118
Trading-Off Static and Dynamic Regret in Online Least-Squares and Beyond

6 September 2019
Jianjun Yuan
Andrew G. Lamperski
Abstract

Recursive least-squares algorithms often use forgetting factors as a heuristic to adapt to non-stationary data streams. The first contribution of this paper rigorously characterizes the effect of forgetting factors for a class of online Newton algorithms. For exp-concave and strongly convex objectives, the algorithms achieve dynamic regret of $\max\{O(\log T), O(\sqrt{TV})\}$, where $V$ is a bound on the path length of the comparison sequence. In particular, we show how classic recursive least-squares with a forgetting factor achieves this dynamic regret bound. By varying $V$, we obtain a trade-off between static and dynamic regret. In order to obtain more computationally efficient algorithms, our second contribution is a novel gradient descent step size rule for strongly convex functions. Our gradient descent rule recovers the order-optimal dynamic regret bounds described above. For smooth problems, we can also obtain static regret of $O(T^{1-\beta})$ and dynamic regret of $O(T^\beta V^*)$, where $\beta \in (0,1)$ and $V^*$ is the path length of the sequence of minimizers. By varying $\beta$, we obtain a trade-off between static and dynamic regret.
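For reference, the classic recursive least-squares recursion with a forgetting factor that the abstract refers to can be sketched as below. This is a minimal illustration of the standard RLS update, not the paper's own formulation or analysis; the variable names (lam for the forgetting factor, delta for the initial regularization) and the synthetic data are illustrative assumptions.

```python
# Minimal sketch of recursive least-squares (RLS) with a forgetting factor.
# Illustrative only; parameter names and data are not taken from the paper.
import numpy as np

def rls_forgetting(X, y, lam=0.98, delta=1.0):
    """Run RLS with forgetting factor lam on a stream of (x_t, y_t) pairs.

    X : (T, d) array of feature vectors, y : (T,) array of targets.
    Returns the sequence of weight estimates, one per round.
    """
    T, d = X.shape
    w = np.zeros(d)            # current weight estimate
    P = np.eye(d) / delta      # inverse of the discounted Gram matrix
    history = []
    for t in range(T):
        x_t, y_t = X[t], y[t]
        # Gain vector: P x / (lam + x^T P x)
        Px = P @ x_t
        k = Px / (lam + x_t @ Px)
        # Prediction error on the new sample, then weight update
        err = y_t - w @ x_t
        w = w + k * err
        # Discounted rank-one update of the inverse Gram matrix
        P = (P - np.outer(k, Px)) / lam
        history.append(w.copy())
    return np.array(history)

# Example usage on synthetic data with a fixed target
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=500)
print(rls_forgetting(X, y, lam=0.98)[-1])
```

Smaller values of lam discount old samples more aggressively, which helps the estimate track a drifting comparator at the cost of higher variance; this is the static-versus-dynamic trade-off the paper quantifies.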
