Beyond $\mathcal{O}(\sqrt{T})$ Regret: Decoupling Learning and Decision-making in Online Linear Programming

Abstract

Online linear programming plays an important role in both revenue management and resource allocation, and recent research has focused on developing efficient first-order online learning algorithms. Despite the empirical success of first-order methods, they typically achieve a regret no better than $\mathcal{O}(\sqrt{T})$, which is suboptimal compared to the $\mathcal{O}(\log T)$ bound guaranteed by the state-of-the-art linear programming (LP)-based online algorithms. This paper establishes a general framework that improves upon the $\mathcal{O}(\sqrt{T})$ result when the LP dual problem exhibits certain error bound conditions. For the first time, we show that first-order learning algorithms achieve $o(\sqrt{T})$ regret in the continuous support setting and $\mathcal{O}(\log T)$ regret in the finite support setting beyond the non-degeneracy assumption. Our results significantly improve the state-of-the-art regret results and provide new insights for sequential decision-making.
