Testing the Manifold Hypothesis

The hypothesis that high dimensional data tend to lie in the vicinity of a low dimensional manifold is the basis of manifold learning. The goal of this paper is to develop an algorithm (with accompanying complexity guarantees) for fitting a manifold to an unknown probability distribution supported in a separable Hilbert space, using only i.i.d. samples from that distribution. More precisely, our setting is the following. Suppose that data are drawn independently at random from a probability distribution $\mathcal{P}$ supported on the unit ball of a separable Hilbert space $\mathcal{H}$. Let $\mathcal{G}(V, \tau)$ be the set of submanifolds of the unit ball of $\mathcal{H}$ whose volume is at most $V$ and whose reach (the supremum of all $r$ such that any point at a distance less than $r$ from the manifold has a unique nearest point on the manifold) is at least $\tau$. Let $L(\mathcal{M}, \mathcal{P})$ denote the mean-squared distance of a random point drawn from $\mathcal{P}$ to $\mathcal{M}$. We obtain an algorithm that tests the manifold hypothesis in the following sense. The algorithm takes i.i.d. random samples from $\mathcal{P}$ as input and determines which of the following two statements is true (at least one must be):

(a) there exists $\mathcal{M} \in \mathcal{G}(CV, \tau/C)$ such that $L(\mathcal{M}, \mathcal{P}) \le C\epsilon$;

(b) there exists no $\mathcal{M} \in \mathcal{G}(V/C, C\tau)$ such that $L(\mathcal{M}, \mathcal{P}) \le \epsilon/C$;

where $C \ge 1$ is a suitable constant and $\epsilon$ is a given error parameter. The answer is correct with probability at least $1 - \delta$.
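To make the quantity being tested concrete, the following is a minimal sketch, not the paper's algorithm: it only illustrates the empirical analogue of the loss $L(\mathcal{M}, \mathcal{P})$ (the mean-squared distance of sample points to a candidate manifold) in the special case where the candidate is a $d$-dimensional affine subspace fitted by PCA, which trivially has bounded volume inside the unit ball and no curvature constraint to check. The function names and the synthetic data are illustrative assumptions.

```python
# Sketch only: empirical mean-squared distance to a PCA-fitted affine candidate,
# as a stand-in for L(M, P) from the abstract. This is NOT the paper's test,
# which searches over manifolds of bounded volume and reach.
import numpy as np

def fit_affine_candidate(samples: np.ndarray, d: int):
    """Fit a d-dimensional affine subspace (mean + top-d principal directions)."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:d]  # orthonormal rows spanning the candidate subspace
    return mean, basis

def empirical_squared_loss(samples: np.ndarray, mean: np.ndarray, basis: np.ndarray) -> float:
    """Average squared distance from the samples to the affine subspace."""
    centered = samples - mean
    projections = centered @ basis.T @ basis  # orthogonal projection onto the subspace
    residuals = centered - projections
    return float(np.mean(np.sum(residuals ** 2, axis=1)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Noisy points near a 2-dimensional subspace of R^20, rescaled into the unit ball.
    latent = rng.normal(size=(500, 2))
    embedding = rng.normal(size=(2, 20))
    X = latent @ embedding + 0.01 * rng.normal(size=(500, 20))
    X /= np.linalg.norm(X, axis=1).max() + 1e-12

    mean, basis = fit_affine_candidate(X, d=2)
    loss = empirical_squared_loss(X, mean, basis)
    print(f"empirical mean-squared distance to fitted 2-d affine candidate: {loss:.2e}")
```

A small empirical loss of this kind is the sort of evidence that would support alternative (a); the paper's contribution is to make such a comparison rigorous over the much richer class $\mathcal{G}(V, \tau)$, with sample-complexity guarantees.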