
Ranking in Contextual Multi-Armed Bandits

30 June 2022
Amitis Shidani
George Deligiannidis
Arnaud Doucet
arXiv:2207.00109
Abstract

We study a ranking problem in the contextual multi-armed bandit setting. At each time step, a learning agent selects an ordered list of items and observes a stochastic outcome for each position. In online recommendation systems, simply showing the most attractive items in order is not the best choice, since both position and item dependencies make the reward function complicated. A simple example is the lack of diversity when all of the most attractive items come from the same category. We model position and item dependencies in the ordered list and design UCB- and Thompson-Sampling-type algorithms for this problem. We prove that the regret over $T$ rounds and $L$ positions is $\tilde{O}(L\sqrt{dT})$, where $d$ is the context dimension, which has the same order as previous work with respect to $T$ and increases only linearly in $L$. Our work generalizes existing studies in several directions, including position dependencies, of which position discounting is a special case, and proposes a more general contextual bandit model.
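
As a rough illustration of how a UCB-type algorithm might operate in this setting, the sketch below implements a LinUCB-style ranking policy under an assumed linear reward model: the expected outcome of placing item i at position k is an inner product between an unknown parameter vector and a known feature vector for that (item, position) pair. This is a minimal sketch under those assumptions, not the authors' algorithm; the class name RankingLinUCB, the exploration scale alpha, and the feature tensor layout are all hypothetical.

```python
import numpy as np

# Minimal LinUCB-style sketch for ranking in a contextual bandit.
# Assumption (not from the paper): the expected reward of placing
# item i at position k is <theta, x_{i,k}> for an unknown theta in
# R^d, with one observed stochastic outcome per position.

class RankingLinUCB:
    def __init__(self, d, L, alpha=1.0, reg=1.0):
        self.d = d                  # feature dimension
        self.L = L                  # number of positions in the list
        self.alpha = alpha          # exploration bonus scale (hypothetical)
        self.A = reg * np.eye(d)    # regularized Gram matrix
        self.b = np.zeros(d)        # accumulated reward-weighted features

    def select(self, features):
        """features: (n_items, L, d) array, one vector per (item, position)."""
        assert features.shape[0] >= self.L, "need at least L candidate items"
        A_inv = np.linalg.inv(self.A)
        theta_hat = A_inv @ self.b
        ranking, used = [], set()
        for k in range(self.L):     # fill positions greedily by UCB score
            best, best_ucb = -1, -np.inf
            for i in range(features.shape[0]):
                if i in used:
                    continue
                x = features[i, k]
                ucb = x @ theta_hat + self.alpha * np.sqrt(x @ A_inv @ x)
                if ucb > best_ucb:
                    best, best_ucb = i, ucb
            ranking.append(best)
            used.add(best)
        return ranking

    def update(self, features, ranking, rewards):
        """One stochastic outcome per position, as described in the abstract."""
        for k, (i, r) in enumerate(zip(ranking, rewards)):
            x = features[i, k]
            self.A += np.outer(x, x)
            self.b += r * x

# Toy usage on synthetic data: 10 items, L = 3 positions, d = 5 features.
rng = np.random.default_rng(0)
bandit = RankingLinUCB(d=5, L=3)
feats = rng.standard_normal((10, 3, 5))
ranking = bandit.select(feats)
bandit.update(feats, ranking, rewards=rng.random(3))
```

Filling positions greedily with per-position UCB scores is only one way to build the ordered list; the paper's model allows richer position and item dependencies, and a Thompson Sampling counterpart would typically sample theta from a posterior instead of adding a confidence bonus.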
