The $L^\infty$ Learnability of Reproducing Kernel Hilbert Spaces

In this work, we analyze the learnability of reproducing kernel Hilbert spaces (RKHS) under the $L^\infty$ norm, which is critical for understanding the performance of kernel methods and random feature models in safety- and security-critical applications. Specifically, we relate the $L^\infty$ learnability of an RKHS to the spectrum decay of the associated kernel, establishing both lower and upper bounds on the sample complexity. In particular, for dot-product kernels on the sphere, we identify conditions under which $L^\infty$ learning can be achieved with polynomially many samples. Let $d$ denote the input dimension and assume the kernel spectrum roughly decays as $\mu_j \sim j^{-1-\beta}$ with $\beta > 0$. We prove that if $\beta$ is independent of the input dimension $d$, then functions in the RKHS can be learned efficiently under the $L^\infty$ norm, i.e., the sample complexity depends polynomially on $d$. In contrast, if $\beta = 1/\mathrm{poly}(d)$, then $L^\infty$ learning requires exponentially many samples.
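As an illustrative aside (not part of the paper), the decay exponent $\beta$ can be estimated empirically: the eigenvalues of the scaled Gram matrix $K/n$ built from samples on the sphere approximate the kernel operator spectrum $\mu_j$. The minimal sketch below uses the ReLU arc-cosine kernel of Cho and Saul as one concrete dot-product kernel; the kernel choice, the values of $d$ and $n$, and the index range used for the log-log fit are all assumptions made for illustration, not taken from the paper.

```python
# Illustrative sketch (assumptions, not the paper's method): estimate the
# spectrum decay exponent of a dot-product kernel on the sphere S^{d-1}.
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 2000                    # illustrative input dimension and sample size
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # uniform samples on S^{d-1}

# ReLU arc-cosine kernel (Cho & Saul), a standard dot-product kernel on the sphere.
u = np.clip(X @ X.T, -1.0, 1.0)
K = (np.sqrt(1.0 - u**2) + (np.pi - np.arccos(u)) * u) / np.pi

# Eigenvalues of K/n approximate the kernel operator spectrum mu_j.
eigvals = np.linalg.eigvalsh(K)[::-1] / n        # sort in descending order

# Fit mu_j ~ j^{-1-beta} on a mid-range of indices via log-log least squares;
# the fit range [5, 200] is an arbitrary illustrative choice.
j = np.arange(1, n + 1)
mask = (j >= 5) & (j <= 200) & (eigvals > 1e-12)
slope, _ = np.polyfit(np.log(j[mask]), np.log(eigvals[mask]), 1)
beta = -slope - 1.0
print(f"fitted decay: mu_j ~ j^({slope:.2f}), i.e. beta ~ {beta:.2f}")
```

Per the dichotomy above, a fitted $\beta$ that stays bounded away from zero as $d$ grows would indicate the polynomial-sample regime, while $\beta$ shrinking like $1/\mathrm{poly}(d)$ would indicate the exponential-sample regime.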