
Conditional regression for single-index models

23 February 2020
A. Lanteri
Mauro Maggioni
Stefano Vigogna
arXiv:2002.10008
Abstract

The single-index model is a statistical model for intrinsic regression in which the responses are assumed to depend on a single yet unknown linear combination of the predictors, allowing the regression function to be expressed as $\mathbb{E}[Y \mid X] = f(\langle v, X \rangle)$ for some unknown index vector $v$ and link function $f$. Estimators converging at the 1-dimensional minimax rate exist, but their implementation has exponential cost in the ambient dimension. Recent attempts at mitigating the computational cost yield estimators that are computable in polynomial time, but do not achieve the optimal rate. Conditional methods estimate the index vector $v$ by averaging moments of $X$ conditioned on $Y$, but do not provide generalization bounds on $f$. In this paper we develop an extensive non-asymptotic analysis of several conditional methods, and propose a new one that combines some benefits of the existing approaches. In particular, we establish $\sqrt{n}$-consistency for all conditional methods considered. Moreover, we prove that polynomial partitioning estimates achieve the 1-dimensional minimax rate for regression of Hölder functions when combined with any $\sqrt{n}$-consistent index estimator. Overall this yields an estimator for dimension reduction and regression of single-index models that attains statistical and computational optimality, thereby closing the statistical-computational gap for this problem.
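
The conditional-method idea described above — recovering the index vector $v$ from moments of $X$ conditioned on $Y$ — can be illustrated with a minimal sketch in the spirit of sliced inverse regression. This is a toy under stated assumptions, not the paper's estimator: the synthetic tanh link, the slice count, and all variable names are illustrative, and the sketch assumes $X$ has identity covariance (otherwise one would whiten first).

```python
import numpy as np

# Toy sketch of a conditional method for single-index regression:
# estimate v by averaging the first moment of X within slices of Y,
# then take the leading principal direction of the sliced means.
# Assumes X has identity covariance (whiten X otherwise).

rng = np.random.default_rng(0)

# Synthetic single-index data: Y = f(<v, X>) + noise, with f = tanh.
d, n = 10, 5000
v_true = np.zeros(d)
v_true[0] = 1.0
X = rng.standard_normal((n, d))
Y = np.tanh(X @ v_true) + 0.1 * rng.standard_normal(n)

# Slice Y into quantile bins and average X within each slice.
n_slices = 20
edges = np.quantile(Y, np.linspace(0, 1, n_slices + 1))
slice_ids = np.clip(np.searchsorted(edges, Y, side="right") - 1, 0, n_slices - 1)
slice_means = np.array([X[slice_ids == s].mean(axis=0) for s in range(n_slices)])

# The top singular direction of the centered slice means estimates span(v).
centered = slice_means - X.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
v_hat = Vt[0]
v_hat *= np.sign(v_hat @ v_true)  # fix the sign for comparison only

print("alignment |<v_hat, v_true>|:", abs(v_hat @ v_true))

# Given v_hat, the link f reduces to 1-D regression on (X @ v_hat, Y);
# the paper's route to the minimax rate is a polynomial partitioning
# estimate applied to these projected data.
```

The two-stage structure mirrors the abstract's argument: any $\sqrt{n}$-consistent index estimate reduces the problem to one dimension, where a piecewise-polynomial regressor on the projections attains the 1-dimensional rate.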
