Multi-Player Approaches for Dueling Bandits

25 May 2024
Or Raveh
Junya Honda
Masashi Sugiyama
Abstract

Various approaches have emerged for multi-armed bandits in distributed systems. The multiplayer dueling bandit problem, which arises in scenarios where only preference-based information such as human feedback is available, introduces the additional challenge of coordinating exploration so that players do not collectively waste comparisons on non-informative arm pairs, yet it has received little attention. To fill this gap, we demonstrate that directly applying a Follow Your Leader black-box approach matches the lower bound for this setting when known dueling bandit algorithms are used as a foundation. Additionally, we analyze a message-passing, fully distributed approach with a novel Condorcet-winner recommendation protocol, which expedites exploration in many cases. Our experimental comparisons reveal that our multiplayer algorithms surpass single-player benchmark algorithms, underscoring their efficacy in addressing the nuanced challenges of the multiplayer dueling bandit setting.

@article{raveh2025_2405.16168,
  title={Multi-Player Approaches for Dueling Bandits},
  author={Or Raveh and Junya Honda and Masashi Sugiyama},
  journal={arXiv preprint arXiv:2405.16168},
  year={2025}
}