
Discounted Online Convex Optimization: Uniform Regret Across a Continuous Interval

18 pages (main text), 3 figures, 3-page bibliography
Abstract

Reflecting the greater significance of recent history over the distant past in non-stationary environments, $\lambda$-discounted regret has been introduced in online convex optimization (OCO) to gracefully forget past data as new information arrives. When the discount factor $\lambda$ is given, online gradient descent (OGD) with an appropriate step size achieves an $O(1/\sqrt{1-\lambda})$ discounted regret. However, the value of $\lambda$ is often not predetermined in real-world scenarios, which raises a significant open question: is it possible to develop a discounted algorithm that adapts to an unknown discount factor? In this paper, we answer this question affirmatively with a novel analysis demonstrating that smoothed OGD (SOGD) achieves a uniform $O(\sqrt{\log T/(1-\lambda)})$ discounted regret that holds simultaneously for all values of $\lambda$ in a continuous interval. The basic idea is to maintain multiple OGD instances, each handling a different discount factor, and to aggregate their outputs sequentially with an online prediction algorithm called the Discounted-Normal-Predictor (DNP) (Kapralov and Panigrahy, 2010). Our analysis reveals that DNP can combine the decisions of two experts even when they operate on discounted regret with different discount factors.
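To make the known-$\lambda$ baseline from the abstract concrete, here is a minimal sketch of OGD tuned to a given discount factor. This is not the paper's SOGD/DNP algorithm; the Euclidean-ball domain, the step size proportional to $\sqrt{1-\lambda}$, and the linearized discounted-regret evaluation are illustrative assumptions consistent with the $O(1/\sqrt{1-\lambda})$ bound the abstract mentions.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Project x onto the Euclidean ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def discounted_ogd(gradients, lam, radius=1.0):
    """OGD over a ball with a step size tuned to a *known* discount
    factor lam (illustrative choice: eta ~ sqrt(1 - lam))."""
    d = len(gradients[0])
    eta = radius * np.sqrt(1.0 - lam)  # assumed tuning, not the paper's exact constant
    x = np.zeros(d)
    iterates = []
    for g in gradients:
        iterates.append(x.copy())
        x = project_ball(x - eta * np.asarray(g), radius)
    return iterates

def discounted_regret(gradients, iterates, comparator, lam):
    """Linearized lambda-discounted regret:
    sum_t lam^(T-t) * <g_t, x_t - u>, with t = 1..T."""
    T = len(gradients)
    return sum(
        lam ** (T - 1 - t) * np.dot(gradients[t], iterates[t] - comparator)
        for t in range(T)
    )
```

In this sketch, smaller $\lambda$ (faster forgetting) allows a larger step size, trading off stability against responsiveness; the open question in the abstract is how to get a comparable guarantee without knowing $\lambda$ in advance.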

@article{yang2025_2505.19491,
  title={Discounted Online Convex Optimization: Uniform Regret Across a Continuous Interval},
  author={Wenhao Yang and Sifan Yang and Lijun Zhang},
  journal={arXiv preprint arXiv:2505.19491},
  year={2025}
}