
arXiv:1606.00119

Contextual Bandits with Latent Confounders: An NMF Approach

1 June 2016
Rajat Sen
Karthikeyan Shanmugam
Murat Kocaoglu
A. Dimakis
Sanjay Shakkottai
Abstract

Motivated by online recommendation and advertising systems, we consider a causal model for stochastic contextual bandits with a latent low-dimensional confounder. In our model, there are $L$ observed contexts and $K$ arms of the bandit. The observed context influences the reward obtained through a latent confounder variable with cardinality $m$ ($m \ll L, K$). The arm choice and the latent confounder causally determine the reward, while the observed context is correlated with the confounder. Under this model, the $L \times K$ mean reward matrix $\mathbf{U}$ (one entry for each context in $[L]$ and each arm in $[K]$) factorizes into non-negative factors $\mathbf{A}$ ($L \times m$) and $\mathbf{W}$ ($m \times K$). This insight enables us to propose an $\epsilon$-greedy NMF-Bandit algorithm that designs a sequence of interventions (selecting specific arms), achieving a balance between learning this low-dimensional structure and selecting the best arm to minimize regret. Our algorithm achieves a regret of $\mathcal{O}\left(L\,\mathrm{poly}(m, \log K)\log T\right)$ at time $T$, as compared to $\mathcal{O}(LK\log T)$ for conventional contextual bandits, assuming a constant gap between the best arm and the rest for each context. These guarantees are obtained under mild sufficiency conditions on the factors that are weaker versions of the well-known Statistical RIP condition. We further propose a class of generative models that satisfy our sufficient conditions, and derive a lower bound of $\mathcal{O}\left(Km\log T\right)$. These are the first regret guarantees for online matrix completion with bandit feedback when the rank is greater than one. We further compare the performance of our algorithm with the state of the art on synthetic and real-world data-sets.
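To make the low-rank structure concrete, the following is a minimal sketch (not the paper's NMF-Bandit algorithm) of the key idea: the $L \times K$ mean reward matrix $\mathbf{U} = \mathbf{A}\mathbf{W}$ has non-negative rank $m \ll L, K$, so noisy empirical means can be denoised by a rank-$m$ non-negative factorization before picking the best arm per context. The $\epsilon$-greedy interleaving is simplified here to a pure explore-then-exploit scheme, and the NMF solver is standard Lee-Seung multiplicative updates; all dimensions and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, K, m = 20, 15, 3  # contexts, arms, latent confounder cardinality (assumed)

# Ground-truth non-negative factors: U = A @ W has non-negative rank m.
A = rng.random((L, m))
W = rng.random((m, K))
U = A @ W  # L x K matrix of true mean rewards

def nmf(V, m, iters=300, tol=1e-9):
    """Rank-m non-negative factorization via Lee-Seung multiplicative updates."""
    Ah = rng.random((V.shape[0], m)) + tol
    Wh = rng.random((m, V.shape[1])) + tol
    for _ in range(iters):
        Wh *= (Ah.T @ V) / (Ah.T @ Ah @ Wh + tol)
        Ah *= (V @ Wh.T) / (Ah @ Wh @ Wh.T + tol)
    return Ah, Wh

# Exploration: sample each (context, arm) pair n times under Gaussian noise.
n = 30
Vhat = U + 0.1 * rng.standard_normal((L, K, n)).mean(axis=2)

# Exploitation: denoise via rank-m NMF, then play the argmax per context.
Ah, Wh = nmf(Vhat, m)
Uhat = Ah @ Wh
best_est = Uhat.argmax(axis=1)
best_true = U.argmax(axis=1)
print("contexts where estimated best arm matches truth:",
      int((best_est == best_true).sum()), "of", L)
```

Because the factorization pools information across all $L$ contexts and $K$ arms, far fewer samples per entry are needed than if each of the $LK$ means were estimated independently, which is the source of the improvement from $\mathcal{O}(LK\log T)$ to $\mathcal{O}(L\,\mathrm{poly}(m,\log K)\log T)$ regret.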
