ResearchTrend.AI
arXiv:2110.07881
\texttt{k-experts} -- Online Policies and Fundamental Limits

15 October 2021
S. Mukhopadhyay
Sourav Sahoo
Abhishek Sinha
    OffRL
Abstract

We introduce the \texttt{k-experts} problem -- a generalization of the classic Prediction with Expert's Advice framework. Unlike the classic version, where the learner selects exactly one expert from a pool of $N$ experts at each round, in this problem the learner can select a subset of $k$ experts at each round ($1 \leq k \leq N$). The reward obtained by the learner at each round is assumed to be a function of the $k$ selected experts. The primary objective is to design an online learning policy with a small regret. In this pursuit, we propose \texttt{SAGE} (\textbf{Sa}mpled Hed\textbf{ge}) -- a framework for designing efficient online learning policies by leveraging statistical sampling techniques. For a wide class of reward functions, we show that \texttt{SAGE} either achieves the first sublinear regret guarantee or improves upon the existing ones. Furthermore, going beyond the notion of regret, we fully characterize the mistake bounds achievable by online learning policies for stable loss functions. We conclude the paper by establishing a tight regret lower bound for a variant of the \texttt{k-experts} problem and carrying out experiments with standard datasets.
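To make the setting concrete, the following is a minimal toy sketch of a sampling-based Hedge variant for the $k$-subset setting. It is an illustration of the general idea only, not the paper's actual \texttt{SAGE} algorithm (whose sampling schemes and guarantees are more refined); the reward chosen here -- the best reward among the $k$ selected experts -- is just one example of a reward that is a function of the selected subset.

```python
import numpy as np

def hedge_k_sample(rewards, k, eta=0.5, seed=0):
    """Toy k-subset Hedge sketch (NOT the paper's SAGE algorithm).

    rewards: (T, N) array of per-expert rewards in [0, 1].
    Each round, k distinct experts are sampled with probability
    proportional to their Hedge weights; the learner collects the
    best reward among the chosen subset (one possible set reward).
    Returns the total reward accumulated over the T rounds.
    """
    rng = np.random.default_rng(seed)
    T, N = rewards.shape
    weights = np.ones(N)
    total = 0.0
    for t in range(T):
        p = weights / weights.sum()
        # Sample a subset of k distinct experts from the Hedge distribution.
        chosen = rng.choice(N, size=k, replace=False, p=p)
        # Set reward: best expert within the selected subset this round.
        total += rewards[t, chosen].max()
        # Full-information multiplicative-weights update on all experts.
        weights *= np.exp(eta * rewards[t])
    return total
```

With `k = N` the subset is always the full pool, so the total reward is deterministically the sum of the per-round maxima; smaller `k` trades this off against the randomness of the sampled subset.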
