Batched Stochastic Bandit for Nondegenerate Functions

9 May 2024
Yu Liu
Yunlu Shu
Tianyu Wang
Abstract

This paper studies batched bandit learning problems for nondegenerate functions. We introduce an algorithm that solves the batched bandit problem for nondegenerate functions near-optimally. More specifically, we introduce an algorithm, called Geometric Narrowing (GN), whose regret bound is of order $\widetilde{\mathcal{O}}(A_{+}^d \sqrt{T})$. In addition, GN only needs $\mathcal{O}(\log \log T)$ batches to achieve this regret. We also provide a lower bound analysis for this problem. More specifically, we prove that over some (compact) doubling metric space of doubling dimension $d$: 1. For any policy $\pi$, there exists a problem instance on which $\pi$ admits a regret of order $\Omega(A_{-}^d \sqrt{T})$; 2. No policy can achieve a regret of order $A_{-}^d \sqrt{T}$ over all problem instances using fewer than $\Omega(\log \log T)$ rounds of communication. Our lower bound analysis shows that the GN algorithm achieves near-optimal regret with a minimal number of batches.
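
The abstract does not spell out how an algorithm can get away with only $\mathcal{O}(\log \log T)$ batches; the usual device in batched-bandit work is a doubly exponential batch grid with endpoints roughly $T^{1 - 2^{-k}}$. The sketch below is not the authors' Geometric Narrowing algorithm (which operates over a doubling metric space); it is a minimal batched successive-elimination routine over a finite arm set, assuming 1-sub-Gaussian rewards and a hypothetical `pull(arm, n)` sampling interface, meant only to illustrate how such a grid keeps the number of batches at $\mathcal{O}(\log \log T)$ while retaining $\widetilde{\mathcal{O}}(\sqrt{T})$-style regret.

```python
import math
import numpy as np


def batch_grid(T: int) -> list[int]:
    """Doubly exponential batch endpoints T_k ~ T^(1 - 2^-k) for k = 1..M-1,
    with M = O(log log T); the final endpoint is clipped to T."""
    M = max(1, math.ceil(math.log2(math.log2(T)))) + 1
    ends = [min(T, math.ceil(T ** (1.0 - 2.0 ** (-k)))) for k in range(1, M)]
    ends.append(T)
    return ends


def batched_successive_elimination(pull, n_arms: int, T: int, delta: float = 0.05):
    """Illustrative batched bandit: pull all active arms equally within each batch,
    then eliminate arms whose upper confidence bound falls below the best arm's
    lower confidence bound. `pull(arm, n)` must return n i.i.d. rewards (assumption)."""
    active = list(range(n_arms))
    sums = np.zeros(n_arms)
    counts = np.zeros(n_arms)
    spent = 0
    for end in batch_grid(T):
        budget = end - spent
        if budget <= 0 or len(active) == 1:
            break
        per_arm = max(1, budget // len(active))
        for a in active:
            rewards = pull(a, per_arm)
            sums[a] += float(np.sum(rewards))
            counts[a] += per_arm
        spent += per_arm * len(active)
        # Confidence radius from a Hoeffding-style bound (assumes 1-sub-Gaussian rewards).
        means = sums[active] / counts[active]
        radius = np.sqrt(2.0 * np.log(2.0 * n_arms * T / delta) / counts[active])
        best_lcb = np.max(means - radius)
        active = [a for a, m, r in zip(active, means, radius) if m + r >= best_lcb]
    return active


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_means = np.array([0.3, 0.5, 0.7])
    pull = lambda a, n: rng.normal(true_means[a], 1.0, size=n)
    # With T = 100_000 the grid has only 6 batches, yet the best arm survives.
    print(batched_successive_elimination(pull, n_arms=3, T=100_000))
```

The key point mirrored from the paper's batch-complexity result is that the number of elimination rounds is driven by the grid length, not by $T$ itself: for $T = 10^5$ the grid above has only 6 endpoints, and doubling $T$ repeatedly barely changes that count.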

@article{liu2025_2405.05733,
  title={Batched Stochastic Bandit for Nondegenerate Functions},
  author={Yu Liu and Yunlu Shu and Tianyu Wang},
  journal={arXiv preprint arXiv:2405.05733},
  year={2025}
}