
Nonlinear Approximation via Compositions

Abstract

Given a function dictionary $\cal D$ and an approximation budget $N\in\mathbb{N}^+$, nonlinear approximation seeks the linear combination of the best $N$ terms $\{T_n\}_{1\le n\le N}\subseteq{\cal D}$ to approximate a given function $f$ with the minimum approximation error
\[\varepsilon_{L,f}:=\min_{\{g_n\}\subseteq\mathbb{R},\,\{T_n\}\subseteq{\cal D}}\Big\|f(x)-\sum_{n=1}^N g_n T_n(x)\Big\|.\]
Motivated by the recent success of deep learning, we propose dictionaries with functions in the form of compositions, i.e.,
\[T(x)=T^{(L)}\circ T^{(L-1)}\circ\cdots\circ T^{(1)}(x)\]
for all $T\in\cal D$, and implement $T$ using ReLU feed-forward neural networks (FNNs) with $L$ hidden layers. We further quantify the improvement of the best $N$-term approximation rate in terms of $N$ when $L$ is increased from $1$ to $2$ or $3$ to show the power of compositions. In the case $L>3$, our analysis shows that increasing $L$ cannot improve the approximation rate in terms of $N$. In particular, for any function $f$ on $[0,1]$, regardless of its smoothness and even of its continuity, if $f$ can be approximated using a dictionary with $L=1$ at the best $N$-term approximation rate $\varepsilon_{L,f}={\cal O}(N^{-\eta})$, we show that dictionaries with $L=2$ can improve the best $N$-term approximation rate to $\varepsilon_{L,f}={\cal O}(N^{-2\eta})$. We also show that for H\"older continuous functions of order $\alpha$ on $[0,1]^d$, the application of a dictionary with $L=3$ in nonlinear approximation can achieve an essentially tight best $N$-term approximation rate $\varepsilon_{L,f}={\cal O}(N^{-2\alpha/d})$. Finally, we show that dictionaries consisting of wide FNNs with a few hidden layers are more attractive in terms of computational efficiency than dictionaries with narrow and very deep FNNs for approximating H\"older continuous functions, provided that the number of computer cores available for parallel computing is larger than $N$.
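To make the two objects in the abstract concrete, here is a minimal NumPy sketch (not part of the paper; the widths, random weights, and coefficients $g_n$ below are placeholder assumptions chosen only for illustration) of a dictionary element $T$ realized as a ReLU FNN with $L$ hidden layers, and of an $N$-term combination $\sum_{n=1}^N g_n T_n(x)$.

# Illustrative sketch: each dictionary element T_n is a composition
# T_n = T_n^(L) o ... o T_n^(1), realized as a ReLU FNN with L hidden layers.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def make_relu_fnn(widths, rng):
    # Random ReLU FNN mapping R^{widths[0]} -> R^{widths[-1]}.
    params = [(rng.standard_normal((m, n)), rng.standard_normal(m))
              for n, m in zip(widths[:-1], widths[1:])]
    def T(x):
        h = x
        for i, (W, b) in enumerate(params):
            h = W @ h + b
            if i < len(params) - 1:   # ReLU on the L hidden layers only
                h = relu(h)
        return h
    return T

rng = np.random.default_rng(0)
L, N, d = 3, 4, 1                      # depth, number of terms, input dimension
dictionary = [make_relu_fnn([d] + [8] * L + [1], rng) for _ in range(N)]
g = rng.standard_normal(N)             # linear combination coefficients (placeholders)

def approximant(x):
    # N-term approximant: sum_n g_n * T_n(x)
    return sum(g_n * T_n(x) for g_n, T_n in zip(g, dictionary))

print(approximant(np.array([0.5])))

In practice the paper's best $N$-term error $\varepsilon_{L,f}$ corresponds to optimizing both the coefficients $g_n$ and the choice of $T_n$ from the dictionary, which the random placeholders above do not attempt.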
