In this paper we consider the contextual multi-armed bandit problem for linear payoffs under a risk-averse criterion. At each round, contexts are revealed for each arm, and the decision maker chooses one arm to pull and receives the corresponding reward. In particular, we consider mean-variance as the risk criterion, and the best arm is the one with the largest mean-variance reward. We apply the Thompson Sampling algorithm to the disjoint model, and provide a comprehensive regret analysis for a variant of the proposed algorithm. For $T$ rounds, $K$ actions, and $d$-dimensional feature vectors, we prove a regret bound of $O\left(\left(1+\rho+\frac{1}{\rho}\right) d \ln T \, \ln\frac{K}{\delta} \sqrt{d K T^{1+2\epsilon} \ln\frac{K}{\delta} \frac{1}{\epsilon}}\right)$ that holds with probability $1-\delta$ under the mean-variance criterion with risk tolerance $\rho$, for any $0<\epsilon<\frac{1}{2}$, $0<\delta<1$. The empirical performance of our proposed algorithms is demonstrated via a portfolio selection problem.
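To make the setup concrete, the sketch below illustrates one generic way a mean-variance Thompson Sampling loop over disjoint per-arm linear models might look. It is a minimal illustration under stated assumptions, not the authors' analyzed variant: the score $\rho\mu - \sigma^2$, the Gaussian posterior sampling, the plug-in residual variance estimate, and all names (`mv_thompson_disjoint`, `contexts`, `pull`, `v`, `lam`) are assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mv_thompson_disjoint(contexts, pull, T, d, K, rho=1.0, v=1.0, lam=1.0):
    """Sketch of Thompson Sampling with disjoint linear models under a
    mean-variance criterion (hypothetical interfaces, not the paper's code).

    contexts(t) -> (K, d) array of per-arm feature vectors at round t
    pull(t, k)  -> observed reward for pulling arm k at round t
    The mean-variance score of an arm is taken here as rho * mean - variance,
    with risk tolerance rho; this convention is an assumption.
    """
    A = [lam * np.eye(d) for _ in range(K)]  # per-arm ridge Gram matrices
    b = [np.zeros(d) for _ in range(K)]      # per-arm response vectors
    sq_res = np.zeros(K)                     # running sums of squared residuals
    n = np.zeros(K)                          # per-arm pull counts
    for t in range(T):
        X = contexts(t)                      # shape (K, d)
        scores = np.empty(K)
        for k in range(K):
            A_inv = np.linalg.inv(A[k])
            theta_hat = A_inv @ b[k]
            # draw a posterior sample of the arm's parameter vector
            theta_tilde = rng.multivariate_normal(theta_hat, v**2 * A_inv)
            var_hat = sq_res[k] / max(n[k], 1)   # crude plug-in variance estimate
            scores[k] = rho * (X[k] @ theta_tilde) - var_hat
        k_star = int(np.argmax(scores))      # arm with the largest sampled MV score
        r = pull(t, k_star)
        # disjoint model: only the pulled arm's statistics are updated
        A_inv = np.linalg.inv(A[k_star])
        resid = r - X[k_star] @ (A_inv @ b[k_star])
        A[k_star] += np.outer(X[k_star], X[k_star])
        b[k_star] += r * X[k_star]
        sq_res[k_star] += resid**2
        n[k_star] += 1
    return A, b
```

A portfolio-style usage would supply `contexts` returning per-asset features and `pull` returning realized returns, so the loop trades off sampled expected return against estimated variance at each round.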