Online Linear Regression in Dynamic Environments via Discounting

29 May 2024 · arXiv:2405.19175
Andrew Jacobsen, Ashok Cutkosky
Abstract

We develop algorithms for online linear regression which achieve optimal static and dynamic regret guarantees \emph{even in the complete absence of prior knowledge}. We present a novel analysis showing that a discounted variant of the Vovk-Azoury-Warmuth forecaster achieves dynamic regret of the form $R_{T}(\vec{u})\le O\left(d\log(T)\vee \sqrt{dP_{T}^{\gamma}(\vec{u})T}\right)$, where $P_{T}^{\gamma}(\vec{u})$ is a measure of variability of the comparator sequence, and show that the discount factor achieving this result can be learned on-the-fly. We show that this result is optimal by providing a matching lower bound. We also extend our results to \emph{strongly-adaptive} guarantees which hold over every sub-interval $[a,b]\subseteq[1,T]$ simultaneously.
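To make the central object of the abstract concrete, below is a minimal sketch of a discounted Vovk-Azoury-Warmuth forecaster. It illustrates the general technique only, not the paper's algorithm: the paper learns the discount factor $\gamma$ on the fly, whereas this sketch fixes it, and the class name, the regularization parameter `lam`, and the exact update scheme are assumptions made here for concreteness.

```python
import numpy as np


class DiscountedVAW:
    """Sketch of a discounted Vovk-Azoury-Warmuth (VAW) forecaster.

    Standard VAW predicts y_hat_t = x_t^T (A_{t-1} + x_t x_t^T)^{-1} b_{t-1},
    where A accumulates outer products x_s x_s^T and b accumulates y_s x_s.
    The discounted variant scales both statistics by gamma in (0, 1] each
    round, so stale observations fade and a drifting comparator can be
    tracked. Here gamma is fixed; the paper shows how to learn it online.
    """

    def __init__(self, dim: int, gamma: float = 0.99, lam: float = 1.0):
        self.gamma = gamma           # fixed discount factor (assumption)
        self.A = lam * np.eye(dim)   # discounted second-moment matrix
        self.b = np.zeros(dim)       # discounted label-feature sums

    def predict(self, x: np.ndarray) -> float:
        # VAW folds the current outer product x x^T into the matrix
        # before inverting, unlike plain (forward) ridge regression.
        A_t = self.gamma * self.A + np.outer(x, x)
        return float(x @ np.linalg.solve(A_t, self.gamma * self.b))

    def update(self, x: np.ndarray, y: float) -> None:
        # Discount the past, then fold in the newly revealed label.
        self.A = self.gamma * self.A + np.outer(x, x)
        self.b = self.gamma * self.b + y * x


# Usage on a slowly drifting linear target:
rng = np.random.default_rng(0)
fc = DiscountedVAW(dim=3, gamma=0.95)
w = np.array([1.0, -2.0, 0.5])
for t in range(1000):
    x = rng.normal(size=3)
    y_hat = fc.predict(x)            # predict before seeing the label
    y = float(w @ x) + 0.1 * rng.normal()
    fc.update(x, y)
    w += 0.01 * rng.normal(size=3)   # the comparator drifts over time
```

Setting $\gamma = 1$ recovers the standard undiscounted VAW forecaster; smaller $\gamma$ forgets old rounds faster, trading static accuracy for the ability to track a drifting comparator sequence.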
