
Stochastic Rank-1 Bandits

Abstract

We propose stochastic rank-1 bandits, a class of online learning problems where at each step a learning agent chooses a pair of row and column arms, and receives the product of their payoffs as a reward. The main challenge of the problem is that the learning agent does not observe the payoffs of the individual arms, only their product. The payoffs of the row and column arms are stochastic, and independent of each other. We propose a computationally-efficient algorithm for solving our problem, Rank1Elim, and derive a $O((K + L)(1/\Delta)\log n)$ upper bound on its $n$-step regret, where $K$ is the number of rows, $L$ is the number of columns, and $\Delta$ is the minimum gap in the row and column payoffs. To the best of our knowledge, this is the first bandit algorithm for stochastic rank-1 matrix factorization whose regret is linear in $K + L$, $1/\Delta$, and $\log n$. We evaluate Rank1Elim on a synthetic problem and show that its regret scales as suggested by our upper bound. We also compare it to UCB1, and show significant improvements as $K$ and $L$ increase.
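To make the feedback model concrete, the following is a minimal sketch of one round of a stochastic rank-1 bandit. It assumes Bernoulli payoffs for illustration; the function and variable names (`rank1_round`, `u`, `v`) are hypothetical and not from the paper.

```python
import random

def rank1_round(u, v, i, j, rng=random):
    """Simulate one round of a stochastic rank-1 bandit.

    u[i] and v[j] are the mean payoffs of row arm i and column arm j.
    The learner observes only the product of the two Bernoulli draws,
    never the individual row and column payoffs.
    """
    x = 1 if rng.random() < u[i] else 0  # row payoff (hidden from the learner)
    y = 1 if rng.random() < v[j] else 0  # column payoff (hidden from the learner)
    return x * y  # observed reward

# Example: K = 3 row arms and L = 4 column arms with Bernoulli means.
u = [0.9, 0.5, 0.2]
v = [0.8, 0.6, 0.4, 0.1]
reward = rank1_round(u, v, 0, 0)  # pull the pair (row 0, column 0)
```

The expected reward of a pair $(i, j)$ is the product $u_i v_j$, so the best pair combines the best row and the best column; the difficulty is identifying them from product-only feedback.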
