
arXiv:2211.08586
Bandit Algorithms for Prophet Inequality and Pandora's Box

16 November 2022
Khashayar Gatmiry
Thomas Kesselheim
Sahil Singla
Yishuo Wang
Abstract

The Prophet Inequality and Pandora's Box problems are fundamental stochastic problems with applications in Mechanism Design, Online Algorithms, Stochastic Optimization, Optimal Stopping, and Operations Research. A usual assumption in these works is that the probability distributions of the $n$ underlying random variables are given as input to the algorithm. Since in practice these distributions need to be learned, we initiate the study of such stochastic problems in the Multi-Armed Bandits model. In the Multi-Armed Bandits model we interact with $n$ unknown distributions over $T$ rounds: in round $t$ we play a policy $x^{(t)}$ and receive partial (bandit) feedback on the performance of $x^{(t)}$. The goal is to minimize the regret, which is the difference over $T$ rounds between the total value of the optimal algorithm that knows the distributions and the total value of our algorithm that learns the distributions from the partial feedback. Our main results give near-optimal $\tilde{O}(\mathsf{poly}(n)\sqrt{T})$ total regret algorithms for both Prophet Inequality and Pandora's Box. Our proofs proceed by maintaining confidence intervals on the unknown indices of the optimal policy. The exploration-exploitation tradeoff prevents us from directly refining these confidence intervals, so the main technique is to design a regret upper bound that is learnable while playing low-regret Bandit policies.
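The bandit interaction protocol the abstract describes (play a policy each round, observe only partial feedback on it, and accumulate regret against the optimum that knows the distributions) can be illustrated with the classic UCB1 index policy on Bernoulli arms. This is a generic sketch of the Multi-Armed Bandits model and its regret notion, not the paper's algorithm, and the arm means used below are made-up example values:

```python
import math
import random

def ucb1(means, T, seed=0):
    """Run UCB1 on len(means) Bernoulli arms for T rounds.

    Returns the pseudo-regret: the gap, summed over rounds, between
    always playing the best arm and the arms actually played.
    Illustrative sketch only -- a standard UCB1 index policy.
    """
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n      # pulls per arm
    sums = [0.0] * n      # total reward per arm
    best = max(means)     # value of the distribution-aware optimum
    regret = 0.0
    for t in range(1, T + 1):
        if t <= n:
            arm = t - 1   # initialization: play each arm once
        else:
            # UCB index: empirical mean + confidence radius; the radius
            # shrinks as an arm is pulled more, balancing exploration
            # and exploitation.
            arm = max(
                range(n),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]),
            )
        reward = 1.0 if rng.random() < means[arm] else 0.0  # bandit feedback
        counts[arm] += 1
        sums[arm] += reward
        regret += best - means[arm]
    return regret
```

Over $T$ rounds the pseudo-regret of such an index policy grows sublinearly in $T$, which is the shape of guarantee the abstract's $\tilde{O}(\mathsf{poly}(n)\sqrt{T})$ bounds refer to.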
