Improved Variance-Aware Confidence Sets for Linear Bandits and Linear Mixture MDP

29 January 2021
Zihan Zhang, Jiaqi Yang, Xiangyang Ji, S. Du
arXiv:2101.12745
Abstract

This paper presents new \emph{variance-aware} confidence sets for linear bandits and linear mixture Markov Decision Processes (MDPs). With the new confidence sets, we obtain the following regret bounds: For linear bandits, we obtain an $\tilde{O}(\mathrm{poly}(d)\sqrt{1 + \sum_{k=1}^{K}\sigma_k^2})$ data-dependent regret bound, where $d$ is the feature dimension, $K$ is the number of rounds, and $\sigma_k^2$ is the \emph{unknown} variance of the reward at the $k$-th round. This is the first regret bound that scales only with the variance and the dimension, with \emph{no explicit polynomial dependency on $K$}. When variances are small, this bound can be significantly smaller than the $\tilde{\Theta}(d\sqrt{K})$ worst-case regret bound. For linear mixture MDPs, we obtain an $\tilde{O}(\mathrm{poly}(d, \log H)\sqrt{K})$ regret bound, where $d$ is the number of base models, $K$ is the number of episodes, and $H$ is the planning horizon. This is the first regret bound that scales only \emph{logarithmically} with $H$ in the setting of reinforcement learning with linear function approximation, thus \emph{exponentially improving} existing results and resolving an open problem in \citep{zhou2020nearly}. We develop three technical ideas that may be of independent interest: 1) applications of the peeling technique to both the input norm and the variance magnitude, 2) a recursion-based estimator for the variance, and 3) a new convex potential lemma that generalizes the seminal elliptical potential lemma.
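The abstract only names the techniques, so below is a minimal NumPy sketch of the general idea behind a variance-aware confidence set in the linear bandit setting: ridge regression in which each past round is down-weighted by its reward variance, followed by an optimistic elliptical bonus. Everything here is an illustrative assumption, including the function name weighted_ridge_ucb, the fixed confidence radius beta, and the availability of per-round variance estimates; the paper's actual construction (peeling over the input norm and the variance magnitude, a recursion-based variance estimator, and a data-dependent radius) is not reproduced.

```python
import numpy as np

def weighted_ridge_ucb(features, rewards, variances, candidates, lam=1.0, beta=1.0):
    """Pick an action optimistically from a variance-weighted ridge estimate.

    features   : (k, d) past action feature vectors
    rewards    : (k,)   observed rewards
    variances  : (k,)   per-round reward variance estimates (assumed given here)
    candidates : (m, d) feature vectors of the actions available now
    lam, beta  : regularization and confidence radius (illustrative constants;
                 the paper derives a data-dependent radius instead)
    """
    k, d = features.shape
    # Inverse-variance weights: low-variance rounds carry more information.
    w = 1.0 / np.maximum(variances, 1e-8)

    # Weighted Gram matrix  Lambda = lam*I + sum_k w_k x_k x_k^T
    Lam = lam * np.eye(d) + (features * w[:, None]).T @ features
    # Weighted least-squares estimate  theta_hat = Lambda^{-1} sum_k w_k r_k x_k
    theta_hat = np.linalg.solve(Lam, (features * (w * rewards)[:, None]).sum(axis=0))

    # Optimistic score: <x, theta_hat> + beta * ||x||_{Lambda^{-1}}
    Lam_inv = np.linalg.inv(Lam)
    bonus = beta * np.sqrt(np.einsum("md,de,me->m", candidates, Lam_inv, candidates))
    return int(np.argmax(candidates @ theta_hat + bonus))

# Toy usage: 50 past rounds, 5 candidate actions, feature dimension 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
theta_true = np.array([1.0, -0.5, 0.2, 0.0])
sigma = rng.uniform(0.05, 0.5, size=50)          # heteroscedastic noise levels
r = X @ theta_true + sigma * rng.normal(size=50)
cand = rng.normal(size=(5, 4))
print(weighted_ridge_ucb(X, r, sigma**2, cand))
```

In this toy setup, rounds with small variance tighten the confidence ellipsoid faster, which is, informally, how a data-dependent $\sqrt{1 + \sum_{k=1}^{K}\sigma_k^2}$-type dependence can arise instead of a worst-case $\sqrt{K}$ one.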
