Fast Rates for Nonparametric Online Learning: From Realizability to Learning in Games

17 November 2021
Constantinos Daskalakis
Noah Golowich
arXiv:2111.08911
Abstract

We study fast rates of convergence in the setting of nonparametric online regression, namely where regret is defined with respect to an arbitrary function class which has bounded complexity. Our contributions are two-fold:

  • In the realizable setting of nonparametric online regression with the absolute loss, we propose a randomized proper learning algorithm which achieves near-optimal cumulative loss in terms of the sequential fat-shattering dimension of the hypothesis class. In the setting of online classification with a class of Littlestone dimension $d$, our bound reduces to $d \cdot \mathrm{poly}\log T$. This result answers the question of whether proper learners can achieve near-optimal cumulative loss; previously, even for online classification, the best known cumulative loss bound was $\tilde{O}(\sqrt{dT})$. Further, for the real-valued (regression) setting, a cumulative loss bound with near-optimal scaling in the sequential fat-shattering dimension was not known even for improper learners prior to this work.

  • Using the above result, we exhibit an independent learning algorithm for general-sum binary games of Littlestone dimension $d$, in which each player achieves regret $\tilde{O}(d^{3/4} \cdot T^{1/4})$. This generalizes analogous results of Syrgkanis et al. (2015), who showed that in finite games the optimal regret can be accelerated from $O(\sqrt{T})$ in the adversarial setting to $O(T^{1/4})$ in the game setting.

To establish these results, we introduce several new techniques, including: a hierarchical aggregation rule to achieve the optimal cumulative loss for real-valued classes; a multi-scale extension of the proper online realizable learner of Hanneke et al. (2021); an approach to show that the output of such nonparametric learning algorithms is stable; and a proof that the minimax theorem holds in all online learnable games.
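
To fix notation for the bounds quoted above, the display below spells out the standard definitions of cumulative loss and regret for online regression with the absolute loss, and notes how realizability collapses the two. This is a conventional formulation of the setting, not an excerpt from the paper.

\[
\mathrm{CumLoss}_T \;=\; \sum_{t=1}^{T} \bigl|\hat{y}_t - y_t\bigr|,
\qquad
\mathrm{Reg}_T \;=\; \mathrm{CumLoss}_T \;-\; \inf_{f \in \mathcal{F}} \sum_{t=1}^{T} \bigl|f(x_t) - y_t\bigr|,
\]

where at each round $t$ the learner predicts $\hat{y}_t$ after observing $x_t$, and the true label $y_t$ is then revealed. In the realizable setting, some $f^{\ast} \in \mathcal{F}$ satisfies $y_t = f^{\ast}(x_t)$ for every $t$, so the benchmark term vanishes and $\mathrm{Reg}_T = \mathrm{CumLoss}_T$; the $d \cdot \mathrm{poly}\log T$ bound for classes of Littlestone dimension $d$ refers to this quantity. In the game-theoretic result, each player independently runs an online learning algorithm, and $\mathrm{Reg}_T$ is measured against the sequence of actions actually played by the other players.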
