Online Convex Optimization with Stochastic Constraints: Zero Constraint Violation and Bandit Feedback

26 January 2023
Y. Kim and Dabeen Lee
Abstract

This paper studies online convex optimization with stochastic constraints. We propose a variant of the drift-plus-penalty algorithm that guarantees $O(\sqrt{T})$ expected regret and zero constraint violation after a fixed number of iterations, which improves upon the vanilla drift-plus-penalty method and its $O(\sqrt{T})$ constraint violation. Moreover, our algorithm is oblivious to the length of the time horizon $T$, in contrast to the vanilla drift-plus-penalty method. These guarantees rest on a novel drift lemma that provides time-varying bounds on the virtual queue drift and, as a result, time-varying bounds on the expected virtual queue length. We further extend our framework to stochastic-constrained online convex optimization under two-point bandit feedback. We show that by adapting our algorithmic framework to the bandit feedback setting, we can still achieve $O(\sqrt{T})$ expected regret and zero constraint violation, improving upon previous work for the case of identical constraint functions. Numerical experiments corroborate our theoretical results.
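The abstract references two ingredients that can be made concrete: a drift-plus-penalty loop driven by a virtual queue, and two-point bandit feedback. The following is a minimal sketch of a generic drift-plus-penalty update combined with a standard two-point gradient estimator, not the authors' exact variant. The quadratic loss, the linear stochastic constraint, and the tuning V = sqrt(T), alpha = T are all illustrative assumptions; note that this vanilla tuning requires knowing T up front, which is precisely the dependence the paper's variant removes.

import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 2000
V = np.sqrt(T)    # penalty weight (vanilla tuning; assumes T is known in advance)
alpha = float(T)  # proximal regularization weight (also an illustrative choice)
R = 1.0           # radius of the feasible Euclidean ball

def project_ball(x, radius=R):
    # Euclidean projection onto the ball of the given radius.
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def two_point_grad(f, x, delta):
    # Standard two-point bandit gradient estimate: two function queries per
    # round along a random unit direction u, scaled by dimension / (2*delta).
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)
    return (x.size / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

a, b = rng.normal(size=d), 0.5  # hypothetical stochastic constraint E[<a_t, x>] <= b

x = np.zeros(d)
Q = 0.0  # virtual queue: clipped running total of observed constraint violation

for t in range(T):
    theta_t = 0.1 * rng.normal(size=d)  # hypothetical round-t loss f_t(x) = ||x - theta_t||^2
    f_t = lambda z, th=theta_t: np.sum((z - th) ** 2)
    # Bandit feedback: estimate the loss gradient from two function values.
    # With full information one would use the exact gradient 2*(x - theta_t).
    grad_f = two_point_grad(f_t, x, delta=1.0 / np.sqrt(t + 1))
    a_t = a + 0.1 * rng.normal(size=d)  # noisy sample of the constraint
    g_val = a_t @ x - b                 # observed violation g_t(x)
    grad_g = a_t

    # Drift-plus-penalty step: minimize V*<grad_f, x'> + Q*<grad_g, x'>
    # + alpha*||x' - x||^2 over the ball; the minimizer is a projected step.
    x = project_ball(x - (V * grad_f + Q * grad_g) / (2.0 * alpha))

    # Virtual queue update: accumulate violation, clipped at zero.
    Q = max(Q + g_val, 0.0)

print("final point:", x, "queue length:", Q)

With two function evaluations per round, the estimator above yields an unbiased gradient of a smoothed surrogate of f_t, which is what lets bandit-feedback variants of such schemes retain full-information rates.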
