Exponential Weights Algorithms for Selective Learning

Abstract

We study the selective learning problem introduced by Qiao and Valiant (2019), in which the learner observes $n$ labeled data points one at a time. At a time of its choosing, the learner selects a window length $w$ and a model $\hat\ell$ from the model class $\mathcal{L}$, and then labels the next $w$ data points using $\hat\ell$. The excess risk incurred by the learner is defined as the difference between the average loss of $\hat\ell$ over those $w$ data points and the smallest possible average loss among all models in $\mathcal{L}$ over those $w$ data points. We give an improved algorithm, termed the hybrid exponential weights algorithm, that achieves an expected excess risk of $O((\log\log|\mathcal{L}| + \log\log n)/\log n)$. This result gives a doubly exponential improvement in the dependence on $|\mathcal{L}|$ over the best known bound of $O(\sqrt{|\mathcal{L}|/\log n})$. We complement the positive result with an almost matching lower bound, which suggests the worst-case optimality of the algorithm. We also study a more restrictive family of learning algorithms that are bounded-recall in the sense that when a prediction window of length $w$ is chosen, the learner's decision only depends on the most recent $w$ data points. We analyze an exponential weights variant of the ERM algorithm in Qiao and Valiant (2019). This new algorithm achieves an expected excess risk of $O(\sqrt{\log|\mathcal{L}|/\log n})$, which is shown to be nearly optimal among all bounded-recall learners. Our analysis builds on a generalized version of the selective mean prediction problem in Drucker (2013); Qiao and Valiant (2019), which may be of independent interest.
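Since the abstract only sketches the protocol, the following is a minimal Python sketch of one selective learning round with a generic exponential-weights selector over a finite model class. It illustrates the problem setup and the excess risk definition only; the function names, the fixed stopping time, and the learning rate are illustrative assumptions, not the paper's hybrid algorithm.

import math
import random

def exp_weights_choice(cum_losses, eta, rng):
    # Sample a model index with probability proportional to
    # exp(-eta * cumulative loss), the standard exponential-weights rule.
    weights = [math.exp(-eta * c) for c in cum_losses]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, wgt in enumerate(weights):
        acc += wgt
        if r <= acc:
            return i
    return len(weights) - 1

def selective_learning_round(losses, t, w, eta, rng):
    """One round of the selective learning protocol (illustrative only).

    The learner stops at time t, looks at the losses on the most recent
    w observed points (a bounded-recall restriction, as in the paper's
    second setting), picks a model by exponential weights, and is then
    evaluated on the next w points.

    losses[i][j] = loss of model i on data point j, assumed in [0, 1].
    Returns the learner's excess risk over the prediction window.
    """
    k = len(losses)  # |L|, the size of the model class
    # Cumulative loss of each model over the last w observed points.
    recent = [sum(losses[i][max(0, t - w):t]) for i in range(k)]
    chosen = exp_weights_choice(recent, eta, rng)
    # Average loss of each model over the upcoming window [t, t + w).
    window_avg = [sum(losses[i][t:t + w]) / w for i in range(k)]
    # Excess risk: chosen model's average loss minus the best in class.
    return window_avg[chosen] - min(window_avg)

A quick usage example on synthetic losses (eta is set to a standard exponential-weights rate; the paper's tuning may differ):

rng = random.Random(0)
n, k, w, t = 1000, 8, 32, 500
losses = [[rng.random() for _ in range(n)] for _ in range(k)]
eta = math.sqrt(math.log(k) / w)
print(selective_learning_round(losses, t, w, eta, rng))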
