
Learning Functions of Few Arbitrary Linear Parameters in High Dimensions

Abstract

Let us assume that $f$ is a continuous function defined on the unit ball of $\mathbb{R}^d$, of the form $f(x) = g(Ax)$, where $A$ is a $k \times d$ matrix and $g$ is a function of $k$ variables for $k \ll d$. We are given a budget $m \in \mathbb{N}$ of possible point evaluations $f(x_i)$, $i = 1, \dots, m$, of $f$, which we are allowed to query in order to construct a uniform approximating function. Under certain smoothness and variation assumptions on the function $g$, and an {\it arbitrary} choice of the matrix $A$, we present in this paper 1. a sampling choice of the points $\{x_i\}$ drawn at random for each function approximation; 2. algorithms (Algorithm 1 and Algorithm 2) for computing the approximating function, whose complexity is at most polynomial in the dimension $d$ and in the number $m$ of points. Due to the arbitrariness of $A$, the choice of the sampling points will be according to suitable random distributions and our results hold with overwhelming probability. Our approach uses tools taken from the {\it compressed sensing} framework, recent Chernoff bounds for sums of positive-semidefinite matrices, and classical stability bounds for invariant subspaces of singular value decompositions.
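
The sketch below is only a minimal illustration of the problem setting described above, not of the paper's Algorithm 1 or Algorithm 2: it builds a function of the form $f(x) = g(Ax)$ for a hypothetical matrix $A$ and function $g$, and spends a budget of $m$ point evaluations at randomly drawn points (here, uniform on the unit sphere, which is one possible choice of random distribution).

```python
# Illustrative sketch only: the matrix A, the function g, and the sampling
# distribution below are hypothetical choices for demonstration; the paper
# prescribes its own sampling scheme and recovery algorithms.
import numpy as np

rng = np.random.default_rng(0)

d, k, m = 1000, 2, 500  # ambient dimension, intrinsic dimension, query budget

# An arbitrary k x d matrix A (here with orthonormal rows, one possible choice).
A = np.linalg.qr(rng.standard_normal((d, k)))[0].T  # shape (k, d)

# A smooth function g of k variables (hypothetical example).
def g(y):
    return np.sin(y[..., 0]) + np.cos(2.0 * y[..., 1])

# The high-dimensional function f(x) = g(Ax), defined on the unit ball of R^d.
def f(x):
    return g(x @ A.T)

# Draw m sampling points uniformly at random on the unit sphere in R^d.
X = rng.standard_normal((m, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

samples = f(X)        # the budget of m point evaluations f(x_i)
print(samples.shape)  # (500,)
```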
