Convergence rates of vector-valued local polynomial regression

13 July 2021
Yariv Aizenbud, B. Sober
arXiv: 2107.05852
Abstract

Non-parametric estimation of functions, as well as their derivatives, by means of local polynomial regression has been studied in the literature since the late 1970s. Given a set of noisy samples of a $\mathcal{C}^k$ smooth function, we perform a local polynomial fit, and by taking its $m$-th derivative we obtain an estimate of the $m$-th derivative of the function. The known optimal rates of convergence for this problem, for a $k$-times smooth function $f:\mathbb{R}^d \to \mathbb{R}$, are $n^{-\frac{k-m}{2k+d}}$. However, in modern applications it is often the case that we have to estimate a function taking values in $\mathbb{R}^D$, where $D \gg d$ is extremely large. In this work, we prove that these same rates of convergence are also achievable by local polynomial regression in the case of a high-dimensional target, given some assumptions on the noise distribution. This result extends Stone's seminal work from 1980 to the regime of a high-dimensional target domain. In addition, we unveil a connection between the failure probability $\varepsilon$ and the number of samples required to achieve the optimal rates.
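As a concrete illustration of the estimator discussed in the abstract, the sketch below fits a local polynomial of degree 2 to noisy samples of a toy map $f:\mathbb{R} \to \mathbb{R}^D$ (so $d = 1$) and reads the value and first-derivative estimates (the $m = 0$ and $m = 1$ cases) off the fitted coefficients. The Epanechnikov kernel, the fixed bandwidth, and the synthetic target are illustrative assumptions, not choices made in the paper; in particular, attaining the rate $n^{-\frac{k-m}{2k+d}}$ requires tuning the bandwidth with $n$, which this snippet does not attempt.

```python
# Minimal sketch of vector-valued local polynomial regression (d = 1, target in R^D).
# Kernel, degree, bandwidth, and the toy target are illustrative assumptions.
import numpy as np

def local_poly_fit(x, Y, x0, degree=2, bandwidth=0.2):
    """Weighted polynomial fit around x0.

    x : (n,) sample locations; Y : (n, D) noisy vector-valued responses.
    Returns beta of shape (degree + 1, D); row j estimates f^(j)(x0) / j!.
    """
    u = (x - x0) / bandwidth
    w = np.maximum(1.0 - u**2, 0.0)  # Epanechnikov kernel weights (illustrative choice)
    X = np.vander(x - x0, N=degree + 1, increasing=True)  # columns: 1, (x-x0), (x-x0)^2, ...
    sw = np.sqrt(w)[:, None]
    # Weighted least squares with one shared design for all D output coordinates:
    # beta = argmin_B || sqrt(W) (X B - Y) ||_F^2
    beta, *_ = np.linalg.lstsq(sw * X, sw * Y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n, D = 500, 50
x = rng.uniform(-1.0, 1.0, size=n)
freqs = np.linspace(1.0, 3.0, D)              # hypothetical smooth target: f_j(t) = sin(freqs_j * t)
Y = np.sin(np.outer(x, freqs)) + 0.1 * rng.standard_normal((n, D))

x0 = 0.3
beta = local_poly_fit(x, Y, x0)
f_hat = beta[0]                               # estimate of f(x0)          (m = 0)
df_hat = beta[1]                              # estimate of f'(x0), 1! * beta[1]  (m = 1)
print("sup-norm error in f :", np.max(np.abs(f_hat - np.sin(freqs * x0))))
print("sup-norm error in f':", np.max(np.abs(df_hat - freqs * np.cos(freqs * x0))))
```

Note that the weighted design matrix depends only on the inputs, so a single least-squares solve serves all $D$ output coordinates at once; the high-dimensional target enters only through the width of the response matrix, which mirrors the abstract's focus on the regime $D \gg d$.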
