
The $L^\infty$ Learnability of Reproducing Kernel Hilbert Spaces

Abstract

In this work, we analyze the learnability of reproducing kernel Hilbert spaces (RKHS) under the $L^\infty$ norm, which is critical for understanding the performance of kernel methods and random feature models in safety- and security-critical applications. Specifically, we relate the $L^\infty$ learnability of an RKHS to the spectral decay of the associated kernel, and we establish both lower and upper bounds on the sample complexity. In particular, for dot-product kernels on the sphere, we identify conditions under which $L^\infty$ learning can be achieved with polynomially many samples. Let $d$ denote the input dimension and assume the kernel spectrum roughly decays as $\lambda_k \sim k^{-1-\beta}$ with $\beta > 0$. We prove that if $\beta$ is independent of the input dimension $d$, then functions in the RKHS can be learned efficiently under the $L^\infty$ norm, i.e., the sample complexity depends polynomially on $d$. In contrast, if $\beta = 1/\mathrm{poly}(d)$, then $L^\infty$ learning requires exponentially many samples.
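To make the dichotomy concrete, here is a minimal back-of-the-envelope sketch (not from the paper): assuming $\lambda_k \sim k^{-1-\beta}$, the spectral tail $\sum_{k > K} \lambda_k$ is roughly $K^{-\beta}/\beta$, so pushing the tail below a tolerance $\varepsilon$ requires on the order of $K \approx (1/(\beta\varepsilon))^{1/\beta}$ modes. The function name and tolerance below are illustrative, and the tail-sum count is only a heuristic proxy for the paper's sample-complexity bounds.

```python
# Illustrative sketch, assuming lambda_k ~ k^{-1-beta}: the tail sum
# sum_{k > K} k^{-1-beta} is approximately K^{-beta} / beta, so the number
# of modes needed to push the tail below a tolerance eps grows like
# (1 / (beta * eps))^{1/beta}.  This is a heuristic proxy for the
# polynomial-vs-exponential dichotomy stated in the abstract, not the
# paper's actual bound.

def modes_needed(beta: float, eps: float = 1e-2) -> float:
    """Approximate K such that sum_{k > K} k^{-1-beta} ~ eps."""
    return (1.0 / (beta * eps)) ** (1.0 / beta)

for d in (5, 10, 20):
    # beta independent of d: the required number of modes stays constant.
    print(f"d = {d:2d}:  beta = 1   -> K ~ {modes_needed(1.0):.1e}")
    # beta = 1/d (a 1/poly(d) rate): the required number of modes explodes.
    print(f"          beta = 1/d -> K ~ {modes_needed(1.0 / d):.1e}")
```

Running the loop shows the count staying at roughly $10^2$ when $\beta = 1$ but blowing up past $10^{30}$ already at $d = 10$ when $\beta = 1/d$, mirroring the polynomial versus exponential sample-complexity regimes described above.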
