
arXiv:1505.02865
Asymptotic Behavior of Minimal-Exploration Allocation Policies: Almost Sure, Arbitrarily Slow Growing Regret

12 May 2015
Wesley Cowan
M. Katehakis
Abstract

The purpose of this paper is to provide further understanding into the structure of the sequential allocation ("stochastic multi-armed bandit", or MAB) problem by establishing probability one finite horizon bounds and convergence rates for the sample (or "pseudo") regret associated with two simple classes of allocation policies $\pi$. For any slowly increasing function $g$, subject to mild regularity constraints, we construct two policies (the $g$-Forcing, and the $g$-Inflated Sample Mean) that achieve a measure of regret of order $O(g(n))$ almost surely as $n \to \infty$, bounded from above and below. Additionally, almost sure upper and lower bounds on the remainder term are established. In the constructions herein, the function $g$ effectively controls the "exploration" of the classical "exploration/exploitation" tradeoff.
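To make the forcing idea concrete, the following is a minimal sketch of a $g$-Forcing-style policy under one common reading of such schemes: explore an arm uniformly at random whenever the running count of forced rounds falls below $g(n)$ (or an arm is still unsampled), and otherwise exploit the arm with the highest sample mean. The function and variable names are illustrative assumptions, not the paper's actual construction or bounds.

```python
import math
import random

def g_forcing_policy(arms, horizon, g=math.log):
    """Sketch of a g-Forcing allocation policy.

    arms: list of zero-argument callables, each returning a random reward.
    horizon: number of rounds n to play.
    g: slowly increasing function controlling forced exploration.
    """
    k = len(arms)
    counts = [0] * k      # pulls per arm
    sums = [0.0] * k      # cumulative reward per arm
    forced = 0            # number of forced-exploration rounds so far
    total_reward = 0.0
    for n in range(1, horizon + 1):
        if forced < g(n) or min(counts) == 0:
            i = random.randrange(k)   # forced exploration: uniform arm
            forced += 1
        else:
            # exploitation: arm with the highest sample mean
            i = max(range(k), key=lambda j: sums[j] / counts[j])
        r = arms[i]()                 # pull arm i, observe reward
        counts[i] += 1
        sums[i] += r
        total_reward += r
    return total_reward, counts
```

Since $g$ grows slowly (e.g. logarithmically), the number of forced rounds up to horizon $n$ stays of order $g(n)$, which is what ties the policy's exploration cost, and hence its regret, to the chosen $g$.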
