ResearchTrend.AI

Meta-Learning of Exploration/Exploitation Strategies: The Multi-Armed Bandit Case
Francis Maes, D. Ernst, L. Wehenkel
arXiv:1207.5208, 22 July 2012
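The paper concerns strategies that trade off exploration against exploitation in multi-armed bandit problems. For context, a minimal sketch of one classic hand-crafted strategy of this kind, ε-greedy, on a Bernoulli bandit (the arm probabilities, parameter values, and function name below are illustrative, not taken from the paper):

```python
import random

def epsilon_greedy_bandit(arm_means, n_steps=10000, epsilon=0.1, seed=0):
    """Epsilon-greedy play on a Bernoulli multi-armed bandit.

    arm_means: true success probability of each arm (hypothetical setup).
    Returns empirical value estimates and pull counts per arm.
    """
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k
    values = [0.0] * k  # running mean reward per arm
    for _ in range(n_steps):
        if rng.random() < epsilon:
            arm = rng.randrange(k)  # explore: pick a uniformly random arm
        else:
            arm = max(range(k), key=lambda a: values[a])  # exploit: best estimate
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        # incremental mean update
        values[arm] += (reward - values[arm]) / counts[arm]
    return values, counts

values, counts = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

With enough steps, the arm with the highest true mean (here the third, 0.8) accumulates most of the pulls, while the fixed ε keeps a small stream of exploratory pulls on the others.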

Papers citing "Meta-Learning of Exploration/Exploitation Strategies: The Multi-Armed Bandit Case"

2 papers shown:

1. The KL-UCB Algorithm for Bounded Stochastic Bandits and Beyond
   Aurélien Garivier, Olivier Cappé
   12 Feb 2011

2. X-Armed Bandits
   Sébastien Bubeck, Rémi Munos, Gilles Stoltz, Csaba Szepesvari
   25 Jan 2010