arXiv:2008.13374
Active Local Learning

31 August 2020
A. Backurs
Avrim Blum
Neha Gupta
Abstract

In this work we consider active local learning: given a query point $x$, and active access to an unlabeled training set $S$, output the prediction $h(x)$ of a near-optimal $h \in H$ using significantly fewer labels than would be needed to actually learn $h$ fully. In particular, the number of label queries should be independent of the complexity of $H$, and the function $h$ should be well-defined, independent of $x$. This immediately also implies an algorithm for distance estimation: estimating the value $\mathrm{opt}(H)$ from many fewer labels than needed to actually learn a near-optimal $h \in H$, by running local learning on a few random query points and computing the average error. For the hypothesis class consisting of functions supported on the interval $[0,1]$ with Lipschitz constant bounded by $L$, we present an algorithm that makes $O((1/\epsilon^6) \log(1/\epsilon))$ label queries from an unlabeled pool of $O((L/\epsilon^4) \log(1/\epsilon))$ samples. It estimates the distance to the best hypothesis in the class to an additive error of $\epsilon$ for an arbitrary underlying distribution. We further generalize our algorithm to more than one dimension. We emphasize that the number of labels used is independent of the complexity of the hypothesis class, which depends on $L$. Furthermore, we give an algorithm to locally estimate the values of a near-optimal function at a few query points of interest with a number of labels independent of $L$. We also consider the related problem of approximating the minimum error that can be achieved by the Nadaraya-Watson estimator under a linear diagonal transformation with eigenvalues coming from a small range.
For a $d$-dimensional pointset of size $N$, our algorithm achieves an additive approximation of $\epsilon$, makes $\tilde{O}(d/\epsilon^2)$ queries, and runs in $\tilde{O}(d^2/\epsilon^{d+4} + dN/\epsilon^2)$ time.
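For context, the Nadaraya-Watson estimator referenced in the abstract is the standard kernel-weighted regressor: the prediction at a point is an average of nearby training labels, weighted by a kernel. The following is a minimal illustrative sketch of that standard estimator only (function names, the Gaussian kernel choice, and the bandwidth are illustrative assumptions; this is not the paper's approximation algorithm):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.1):
    """Standard Nadaraya-Watson kernel regression with a Gaussian kernel.

    The prediction at each query point x is
        f(x) = sum_i K((x - x_i)/h) * y_i  /  sum_i K((x - x_i)/h),
    i.e., a kernel-weighted average of the training labels.
    """
    # Pairwise squared distances between query points and training points.
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    # Gaussian kernel weights.
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    # Weighted average of labels for each query point.
    return (w * y_train[None, :]).sum(axis=1) / w.sum(axis=1)

# Toy usage: noisy samples of a smooth 1-D function.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=200)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.normal(size=200)
pred = nadaraya_watson(x, y, np.array([0.25, 0.75]), bandwidth=0.05)
```

With enough samples and a small bandwidth, `pred` recovers the underlying function near the query points (close to $\sin(2\pi \cdot 0.25) = 1$ and $\sin(2\pi \cdot 0.75) = -1$ here). The paper's question is the harder one of approximating the *minimum achievable error* of such an estimator under restricted linear diagonal transformations, using few label queries.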
