ResearchTrend.AI

arXiv:1708.02640
Time-Space Tradeoffs for Learning from Small Test Spaces: Learning Low Degree Polynomial Functions

8 August 2017
P. Beame
S. Gharan
Xin Yang
Abstract

We develop an extension of recently developed methods for obtaining time-space tradeoff lower bounds for problems of learning from random test samples to handle the situation where the space of tests is significantly smaller than the space of inputs, a class of learning problems that is not handled by prior work. This extension is based on a measure of how matrices amplify the 2-norms of probability distributions that is more refined than the 2-norms of these matrices. As applications that follow from our new technique, we show that any algorithm that learns m-variate homogeneous polynomial functions of degree at most d over F_2 from evaluations on randomly chosen inputs either requires space Ω(mn) or time 2^{Ω(m)}, where n = m^{Θ(d)} is the dimension of the space of such functions. These bounds are asymptotically optimal since they match the tradeoffs achieved by natural learning algorithms for the problems.
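As a rough illustration of the sample model the abstract describes (a sketch for intuition, not code from the paper), the following Python snippet builds the space of degree-at-most-d polynomials over F_2 in m variables and produces one random (input, evaluation) test sample. Over F_2 every monomial is a product of distinct variables (since x^2 = x), so the dimension n is a sum of binomial coefficients, which grows as m^{Θ(d)}; the function and variable names here are the author's own choices for illustration.

```python
from itertools import combinations
from math import comb
import random

def poly_space_dim(m, d):
    """Dimension n of the space of polynomials over F_2 in m variables
    with degree between 1 and d. Over F_2, x^2 = x, so each monomial is
    a product of distinct variables; there are C(m, i) of degree i."""
    return sum(comb(m, i) for i in range(1, d + 1))

def random_poly(m, d, rng):
    """A random polynomial, represented as the set of its monomials
    (each monomial is a tuple of distinct variable indices)."""
    monomials = [c for i in range(1, d + 1) for c in combinations(range(m), i)]
    return {mon for mon in monomials if rng.random() < 0.5}

def evaluate(poly, x):
    """Evaluate at x in {0,1}^m: XOR (sum mod 2) over monomials,
    each monomial being the AND (product) of its variables."""
    return sum(all(x[i] for i in mon) for mon in poly) % 2

rng = random.Random(0)
m, d = 10, 2
n = poly_space_dim(m, d)            # 10 + 45 = 55, versus m^d = 100: n = m^{Θ(d)}
p = random_poly(m, d, rng)          # the unknown function to be learned
sample = [rng.randrange(2) for _ in range(m)]
label = evaluate(p, sample)         # one random test sample: (sample, label)
```

A learner in this model sees a stream of such (sample, label) pairs; the paper's lower bound says that recovering p either takes Ω(mn) bits of memory or 2^{Ω(m)} samples/time.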
