Doubly-Bounded Queue for Constrained Online Learning: Keeping Pace with Dynamics of Both Loss and Constraint

14 December 2024
Juncheng Wang
Bingjie Yan
Yituo Liu
arXiv:2412.10703
Abstract

We consider online convex optimization with time-varying constraints and conduct performance analysis using two stringent metrics: dynamic regret with respect to the online solution benchmark, and hard constraint violation that does not allow any compensated violation over time. We propose an efficient algorithm called Constrained Online Learning with Doubly-bounded Queue (COLDQ), which introduces a novel virtual queue that is both lower and upper bounded, allowing tight control of the constraint violation without the need for the Slater condition. We prove via a new Lyapunov drift analysis that COLDQ achieves $O(T^{\frac{1+V_x}{2}})$ dynamic regret and $O(T^{V_g})$ hard constraint violation, where $V_x$ and $V_g$ capture the dynamics of the loss and constraint functions. For the first time, the two bounds smoothly approach the best-known $O(T^{\frac{1}{2}})$ regret and $O(1)$ violation as the dynamics of the losses and constraints diminish. For strongly convex loss functions, COLDQ matches the best-known $O(\log T)$ static regret while maintaining the $O(T^{V_g})$ hard constraint violation. We further introduce an expert-tracking variation of COLDQ, which achieves the same performance bounds without any prior knowledge of the system dynamics. Simulation results demonstrate that COLDQ outperforms the state-of-the-art approaches.
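The abstract does not spell out COLDQ's update rule, but the mechanism it names, a virtual queue bounded from below and above that scales the constraint penalty, can be conveyed by a generic primal-dual loop. The Python sketch below is illustrative only: the function `coldq_sketch`, the clipping interval `[q_min, q_max]`, the step sizes `eta` and `gamma`, and the exact ordering of the primal and dual steps are placeholder assumptions, not the paper's actual design.

```python
import numpy as np

def coldq_sketch(loss_grads, constraint_fns, constraint_grads,
                 x0, T, eta=0.1, gamma=0.1, q_min=1.0, q_max=50.0):
    """Illustrative virtual-queue-based constrained OCO loop (a sketch,
    not COLDQ itself).

    The queue q acts as a dual variable weighting the constraint; the
    "doubly bounded" idea is that q is clipped to stay in [q_min, q_max].
    Projection onto a known feasible set is omitted for brevity.
    """
    x = np.array(x0, dtype=float)
    q = q_min  # start the queue at its lower bound (assumption)
    iterates = []
    for t in range(T):
        # Primal step: gradient of the loss plus the queue-weighted
        # constraint gradient, both evaluated at the current iterate.
        grad = loss_grads[t](x) + q * constraint_grads[t](x)
        x = x - eta * grad
        # Dual (queue) step: drift by the observed constraint value,
        # then clip into [q_min, q_max] -- the double bounding.
        q = float(np.clip(q + gamma * constraint_fns[t](x), q_min, q_max))
        iterates.append(x.copy())
    return iterates

if __name__ == "__main__":
    T = 100
    # Toy problem: track a drifting quadratic target under a slowly
    # moving linear constraint sum(x) <= 1 + 0.001 t.
    loss_grads = [lambda x, t=t: 2 * (x - 0.01 * t) for t in range(T)]
    constraint_fns = [lambda x, t=t: float(np.sum(x) - 1 - 0.001 * t)
                      for t in range(T)]
    constraint_grads = [lambda x, t=t: np.ones_like(x) for t in range(T)]
    traj = coldq_sketch(loss_grads, constraint_fns, constraint_grads,
                        x0=[0.0, 0.0], T=T)
    print("final iterate:", traj[-1])
```

Read against the abstract's claims, the two bounds play complementary roles: the lower bound keeps the constraint penalty from vanishing, which is plausibly what removes the need for the Slater condition, while the upper bound caps how aggressively the constraint is priced, enabling tight control of the hard violation. How COLDQ actually sets and analyzes these bounds is given in the paper, not here.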
