Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations

17 June 2024
Kazusato Oko
Yujin Song
Taiji Suzuki
Denny Wu
    MLT
arXiv:2406.11828 [abs | PDF | HTML]
Abstract

We study the computational and sample complexity of learning a target function $f_*:\mathbb{R}^d\to\mathbb{R}$ with additive structure, that is, $f_*(x) = \frac{1}{\sqrt{M}}\sum_{m=1}^M f_m(\langle x, v_m\rangle)$, where $f_1,f_2,\dots,f_M:\mathbb{R}\to\mathbb{R}$ are nonlinear link functions of single-index models (ridge functions) with diverse and near-orthogonal index features $\{v_m\}_{m=1}^M$, and the number of additive tasks $M$ grows with the dimensionality, $M \asymp d^\gamma$ for $\gamma \ge 0$. This problem setting is motivated by the classical additive model literature, the recent representation learning theory of two-layer neural networks, and large-scale pretraining, where the model simultaneously acquires a large number of "skills" that are often localized in distinct parts of the trained network. We prove that a large subset of polynomial $f_*$ can be efficiently learned by gradient descent training of a two-layer neural network, with polynomial statistical and computational complexity depending on the number of tasks $M$ and the information exponent of $f_m$, despite the link functions being unknown and $M$ growing with the dimensionality. We complement this learnability guarantee with a computational hardness result by establishing statistical query (SQ) lower bounds for both correlational SQ and full SQ algorithms.
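The target function in the abstract is easy to instantiate numerically. The following is a minimal sketch, not the paper's construction: the dimension $d$, the exponent $\gamma$, the Gaussian input distribution, and the choice of a Hermite-polynomial link (which has information exponent 2) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 512                  # ambient dimension (illustrative choice)
gamma = 0.5              # task-growth exponent: M ~ d^gamma
M = int(d ** gamma)      # number of additive tasks

# Random unit vectors in R^d are near-orthogonal with high
# probability when d is large, matching the "diverse and
# near-orthogonal index features" assumption.
V = rng.standard_normal((M, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)

def he2(z):
    # Normalized Hermite polynomial He_2(z) = (z^2 - 1)/sqrt(2);
    # its lowest nonzero Hermite coefficient is at degree 2,
    # i.e. information exponent 2.
    return (z ** 2 - 1) / np.sqrt(2)

def f_star(X):
    # f_*(x) = (1/sqrt(M)) * sum_m f_m(<x, v_m>),
    # here with every link f_m taken to be He_2 for simplicity.
    Z = X @ V.T          # (n, M) matrix of projections <x, v_m>
    return he2(Z).sum(axis=1) / np.sqrt(M)

# Gaussian inputs, standard in this line of work.
X = rng.standard_normal((1000, d))
y = f_star(X)
print(y.shape, y.mean(), y.std())
```

With this normalization the target has mean zero and order-one variance regardless of $M$, which is the role of the $1/\sqrt{M}$ prefactor in the abstract's formula.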
