Regret Minimization in Isotonic, Heavy-Tailed Contextual Bandits via Adaptive Confidence Bands

19 October 2021
S. Chatterjee, Subhabrata Sen
arXiv: 2110.10245
Abstract

In this paper we initiate a study of nonparametric contextual bandits under shape constraints on the mean reward function. Specifically, we study a setting where the context is one-dimensional and the mean reward function is isotonic with respect to this context. We propose a policy for this problem and show that it attains minimax rate optimal regret. Moreover, we show that the same policy enjoys automatic adaptation: for subclasses of the parameter space where the true mean reward functions are also piecewise constant with k pieces, this policy remains minimax rate optimal simultaneously for all k ≥ 1. Automatic adaptation phenomena are well known for shape-constrained problems in the offline setting; we show that such phenomena carry over to the online setting. The main technical ingredient underlying our policy is a procedure to derive confidence bands for an underlying isotonic function using the isotonic quantile estimator. The confidence band we propose is valid under heavy-tailed noise, and its average width goes to 0 at an adaptively optimal rate. We consider this to be an independent contribution to the isotonic regression literature.
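
To make the key ingredient concrete, the sketch below shows an isotonic quantile fit computed by a brute-force max-min block formula, together with a crude "band" obtained by fitting isotonic estimators at a lower and an upper quantile level. This is not the paper's calibrated confidence-band construction; the function names, the O(n^3) implementation, and the band levels (alpha/2 and 1 - alpha/2) are illustrative assumptions only.

import numpy as np

def isotonic_quantile(y, tau=0.5):
    # Isotonic tau-quantile fit via a max-min block formula:
    #   fit[i] = max over s <= i of ( min over t >= i of Q_tau(y[s:t+1]) ).
    # By construction the result is nondecreasing in i. This brute-force
    # O(n^3) version is for illustration only.
    n = len(y)
    fit = np.empty(n)
    for i in range(n):
        best = -np.inf
        for s in range(i + 1):
            inner = np.inf
            for t in range(i, n):
                inner = min(inner, np.quantile(y[s:t + 1], tau))
            best = max(best, inner)
        fit[i] = best
    return fit

def crude_isotonic_band(y, alpha=0.2):
    # Illustrative band only (not the paper's calibrated construction):
    # isotonic fits at the alpha/2, 1/2, and 1 - alpha/2 quantile levels.
    # Using quantiles rather than means is what keeps the fit stable
    # under heavy-tailed noise.
    lo = isotonic_quantile(y, tau=alpha / 2)
    med = isotonic_quantile(y, tau=0.5)
    hi = isotonic_quantile(y, tau=1 - alpha / 2)
    return lo, med, hi

# Toy usage: a monotone signal observed with heavy-tailed (Student-t) noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = np.sqrt(x) + 0.3 * rng.standard_t(df=2, size=x.size)
lo, med, hi = crude_isotonic_band(y)

The paper's actual band is calibrated to have valid coverage under heavy-tailed noise and an adaptively shrinking average width; see the full text for the construction and the resulting bandit policy.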
