Beating Stochastic and Adversarial Semi-bandits Optimally and Simultaneously

25 January 2019
Julian Zimmert, Haipeng Luo, Chen-Yu Wei
Abstract

We develop the first general semi-bandit algorithm that simultaneously achieves $\mathcal{O}(\log T)$ regret for stochastic environments and $\mathcal{O}(\sqrt{T})$ regret for adversarial environments, without knowledge of the regime or the number of rounds $T$. The leading problem-dependent constants of our bounds are not only optimal in some worst-case sense studied previously, but also optimal for two concrete instances of semi-bandit problems. Our algorithm and analysis extend the recent work of (Zimmert & Seldin, 2019) for the special case of multi-armed bandits, but importantly require a novel hybrid regularizer designed specifically for the semi-bandit problem. Experimental results on synthetic data show that our algorithm indeed performs well uniformly over different environments. We finally provide a preliminary extension of our results to the full bandit feedback setting.
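
To make the follow-the-regularized-leader (FTRL) framework behind this line of work concrete, here is a minimal sketch of the multi-armed-bandit special case (Tsallis-INF of Zimmert & Seldin, 2019) that the paper extends; it is not the paper's semi-bandit algorithm, whose hybrid regularizer is different. The 1/2-Tsallis-entropy regularizer, the $\eta_t = 1/\sqrt{t}$ schedule, and all constants below are illustrative assumptions rather than the exact parameters used in either paper.

```python
import numpy as np

def ftrl_tsallis_weights(cum_loss_est, eta):
    """Solve one 1/2-Tsallis-entropy FTRL step over the simplex:
        x = argmin_x  <x, L> - (2 / eta) * sum_i sqrt(x_i),  sum_i x_i = 1.
    First-order conditions give x_i = 1 / (eta * (L_i + lam))^2, with the
    Lagrange multiplier lam found by bisection so the weights sum to one."""
    L = np.asarray(cum_loss_est, dtype=float)
    K = len(L)
    lo = -L.min() + 1e-12               # need L_i + lam > 0 for finite weights
    hi = -L.min() + np.sqrt(K) / eta    # at this point the weights sum to <= 1
    for _ in range(60):                 # bisection on the multiplier
        lam = 0.5 * (lo + hi)
        if np.sum(1.0 / (eta * (L + lam)) ** 2) > 1.0:
            lo = lam
        else:
            hi = lam
    x = 1.0 / (eta * (L + hi)) ** 2
    return x / x.sum()                  # renormalize away residual bisection error

def run_tsallis_inf(loss_fn, K, T, seed=0):
    """Play T rounds of a K-armed bandit with importance-weighted loss
    estimates; loss_fn(t, arm) must return a loss in [0, 1]."""
    rng = np.random.default_rng(seed)
    L_hat = np.zeros(K)                 # cumulative loss estimates
    total_loss = 0.0
    for t in range(1, T + 1):
        eta = 1.0 / np.sqrt(t)          # assumed schedule; exact constant not verified
        x = ftrl_tsallis_weights(L_hat, eta)
        arm = rng.choice(K, p=x)
        loss = loss_fn(t, arm)
        total_loss += loss
        L_hat[arm] += loss / x[arm]     # importance-weighted estimator
    return total_loss

if __name__ == "__main__":
    # Toy stochastic instance: arm 0 is best (mean loss 0.4), others have mean 0.6.
    means = np.array([0.4, 0.6, 0.6, 0.6, 0.6])
    rng = np.random.default_rng(1)
    print(run_tsallis_inf(lambda t, a: float(rng.random() < means[a]), K=5, T=20000))
```

The same update with the uniform-exploration-free, data-dependent learning rate is what yields the best-of-both-worlds behavior in the bandit case; the paper's contribution is the hybrid regularizer that makes an analogous FTRL update work over the combinatorial action set of semi-bandits.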
