A deep network construction that adapts to intrinsic dimensionality beyond the domain

Abstract

We study the approximation of two-layer compositions f(x) = g(ϕ(x)) via deep networks with ReLU activation, where ϕ is a geometrically intuitive, dimensionality-reducing feature map. We focus on two intuitive and practically relevant choices for ϕ: the projection onto a low-dimensional embedded submanifold and a distance to a collection of low-dimensional sets. We achieve near-optimal approximation rates, which depend only on the complexity of the dimensionality-reducing map ϕ rather than the ambient dimension. Since ϕ encapsulates all nonlinear features that are material to the function f, this suggests that deep nets are faithful to an intrinsic dimension governed by f rather than the complexity of the domain of f. In particular, the prevalent assumption of approximating functions on low-dimensional manifolds can be significantly relaxed using functions of type f(x) = g(ϕ(x)) with ϕ representing an orthogonal projection onto the same manifold.
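To make the composition structure concrete, below is a minimal numerical sketch (not from the paper) of the simplest instance of the first choice of ϕ: an orthogonal projection onto a d-dimensional linear subspace of R^D, where the projection has the closed form P = V Vᵀ. It checks that f = g ∘ ϕ is invariant along the D − d directions that ϕ discards, i.e., the effective input dimension of f is d rather than the ambient D. The dimensions D and d, the basis V, and the link function g are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 50, 3  # ambient vs. intrinsic dimension (illustrative values)

# Orthonormal basis V of a random d-dimensional subspace (columns of V),
# obtained via a reduced QR factorization of a random D x d matrix.
V, _ = np.linalg.qr(rng.standard_normal((D, d)))
P = V @ V.T  # orthogonal projection onto the subspace

def phi(x):
    """Dimensionality-reducing feature map: orthogonal projection onto span(V)."""
    return P @ x

def g(z):
    """Some smooth function acting on the low-dimensional features (assumed)."""
    return np.sin(z.sum()) + 0.5 * np.linalg.norm(z) ** 2

def f(x):
    """Two-layer composition f(x) = g(phi(x)) as in the abstract."""
    return g(phi(x))

x = rng.standard_normal(D)
w = rng.standard_normal(D)
w_perp = w - P @ w  # component of w orthogonal to the subspace

# Moving x orthogonally to the subspace leaves f unchanged: phi, and hence f,
# depends only on d intrinsic coordinates, regardless of D.
print(np.isclose(f(x), f(x + 10.0 * w_perp)))  # True
```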
