Multi-armed Bandit Requiring Monotone Arm Sequences

7 June 2021
Ningyuan Chen
arXiv:2106.03790
Abstract

In many online learning or multi-armed bandit problems, the actions taken (arms pulled) are ordinal and required to be monotone over time. Examples include dynamic pricing, in which firms use markup pricing policies to please early adopters and deter strategic waiting, and clinical trials, in which the dose allocation usually follows the dose-escalation principle to prevent dose-limiting toxicities. We consider the continuum-armed bandit problem when the arm sequence is required to be monotone. We show that when the unknown objective function is Lipschitz continuous, the regret is $O(T)$. When in addition the objective function is unimodal or quasiconcave, the regret is $\tilde{O}(T^{3/4})$ under the proposed algorithm, which is also shown to be the optimal rate. This deviates from the optimal rate $\tilde{O}(T^{2/3})$ in the continuous-armed bandit literature and demonstrates the cost to learning efficiency imposed by the monotonicity requirement.
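The abstract states the rates but not the algorithm. To make the monotonicity constraint concrete, below is a minimal Python sketch of a hypothetical monotone policy for a unimodal objective on [0, 1]: it sweeps a grid of arms left to right (so the arm sequence is nondecreasing), and once the estimated mean drops it commits to the current arm, since backtracking to the better-looking previous point is forbidden. The function `monotone_sweep`, its tuning (a grid of roughly $T^{1/4}$ points with roughly $T^{1/2}$ pulls each), and the Gaussian noise model are all assumptions made here for illustration, not the paper's proposed algorithm; the tuning is chosen so that exploration cost and commitment loss both scale on the order of $T^{3/4}$, loosely matching the rate quoted above.

```python
import numpy as np

def monotone_sweep(reward_fn, T, noise_sd=0.1, rng=None):
    """Hypothetical monotone-arm policy on [0, 1] (illustration only).

    Pulls arms in nondecreasing order: a left-to-right grid sweep,
    followed by committing to the arm where the estimated mean first
    drops. Monotonicity forbids returning to the previous grid point,
    and Lipschitz continuity bounds the per-round loss of staying put
    by (Lipschitz constant) x (grid spacing).
    """
    rng = np.random.default_rng() if rng is None else rng
    n_grid = max(2, int(round(T ** 0.25)))   # ~T^{1/4} grid points
    n_pulls = max(1, int(round(T ** 0.5)))   # ~T^{1/2} pulls per point
    grid = np.linspace(0.0, 1.0, n_grid)

    history = []                 # (arm, reward) pairs; arms never decrease
    prev_mean = -np.inf
    commit_arm = grid[0]
    for x in grid:
        if len(history) + n_pulls > T:
            break
        draws = reward_fn(x) + noise_sd * rng.standard_normal(n_pulls)
        history.extend((x, r) for r in draws)
        commit_arm = x           # the sweep can only stop at the current arm
        if draws.mean() < prev_mean:
            break                # estimated peak passed; cannot go back
        prev_mean = draws.mean()
    while len(history) < T:      # exploit the committed arm to the horizon
        history.append((commit_arm,
                        reward_fn(commit_arm) + noise_sd * rng.standard_normal()))
    return history
```

For example, `monotone_sweep(lambda x: 1.0 - (x - 0.3) ** 2, T=10_000)` sweeps toward the peak near 0.3 and then holds. The sketch also suggests why the unconstrained $\tilde{O}(T^{2/3})$ recipe does not carry over: discretizing and running a standard bandit algorithm over the grid would revisit arms in arbitrary order, which the monotone requirement rules out.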
