Online Learning with Vector Costs and Bandits with Knapsacks

Abstract

We introduce online learning with vector costs ($\OLVCp$) where in each time step $t \in \{1,\ldots,T\}$, we need to play an action $i \in \{1,\ldots,n\}$ that incurs an unknown vector cost in $[0,1]^{d}$. The goal of the online algorithm is to minimize the $\ell_p$ norm of the sum of its cost vectors. This captures the classical online learning setting for $d=1$, and is interesting for general $d$ because of applications like online scheduling where we want to balance the load between different machines (dimensions). We study $\OLVCp$ in both stochastic and adversarial arrival settings, and give a general procedure to reduce the problem from $d$ dimensions to a single dimension. This allows us to use classical online learning algorithms in both full and bandit feedback models to obtain (near) optimal results. In particular, we obtain a single algorithm (up to the choice of learning rate) that gives sublinear regret for stochastic arrivals and a tight $O(\min\{p, \log d\})$ competitive ratio for adversarial arrivals. The $\OLVCp$ problem also occurs as a natural subproblem when trying to solve the popular Bandits with Knapsacks ($\BwK$) problem. This connection allows us to use our $\OLVCp$ techniques to obtain (near) optimal results for $\BwK$ in both stochastic and adversarial settings. In particular, we obtain a tight $O(\log d \cdot \log T)$ competitive ratio algorithm for adversarial $\BwK$, which improves over the $O(d \cdot \log T)$ competitive ratio algorithm of Immorlica et al. [FOCS'19].
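As a small illustration of the objective (not the paper's algorithm), the following sketch computes the $\ell_p$ cost of a sequence of played actions on a toy instance; the function name and data are hypothetical:

```python
def olvc_cost(chosen_costs, p):
    """l_p norm of the sum of the per-step cost vectors.

    chosen_costs: list of length-d cost vectors in [0,1]^d, one per
    time step, where row t is the cost incurred by the action played
    at step t.
    """
    d = len(chosen_costs[0])
    # Accumulated load in each dimension (e.g. load on each machine).
    total = [sum(c[j] for c in chosen_costs) for j in range(d)]
    if p == float("inf"):
        return max(total)
    return sum(x ** p for x in total) ** (1.0 / p)

# Toy instance: T = 3 steps, d = 2 dimensions (two machines).
costs = [[1.0, 0.0],
         [0.0, 1.0],
         [1.0, 0.0]]
# p = infinity recovers the makespan: the maximum per-machine load.
print(olvc_cost(costs, float("inf")))  # prints 2.0
print(olvc_cost(costs, 1))            # prints 3.0 (total load)
```

For $d=1$, every $\ell_p$ norm reduces to the scalar total cost, matching the classical online learning setting mentioned above.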
