Equivalence of approximation by convolutional neural networks and fully-connected networks

Convolutional neural networks are the most widely used type of neural networks in applications. In mathematical analysis, however, mostly fully-connected networks are studied. In this paper, we establish a connection between both network architectures. Using this connection, we show that all upper and lower bounds concerning approximation rates of fully-connected neural networks for functions $f \in \mathcal{C}$ -- for an arbitrary function class $\mathcal{C}$ -- translate to essentially the same bounds concerning approximation rates of convolutional neural networks for functions $f \in \mathcal{C}^{\mathrm{equi}}$, with the class $\mathcal{C}^{\mathrm{equi}}$ consisting of all translation equivariant functions whose first coordinate belongs to $\mathcal{C}$. All presented results consider exclusively the case of convolutional neural networks without any pooling operation and with circular convolutions, i.e., not based on zero-padding.
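To make the central notion concrete, the following minimal NumPy sketch (illustrative only; the helpers `circ_conv` and `layer` are hypothetical and not taken from the paper) checks numerically that a circular-convolution layer without pooling is translation equivariant, i.e., it commutes with cyclic shifts of its input:

```python
import numpy as np

def circ_conv(x, w):
    """Circular convolution of a 1D signal x with a filter w
    (cyclic indexing, i.e., no zero-padding)."""
    n = len(x)
    return np.array([sum(w[j] * x[(i - j) % n] for j in range(len(w)))
                     for i in range(n)])

def layer(x, w):
    """One convolutional layer without pooling: circular convolution
    followed by a pointwise ReLU nonlinearity."""
    return np.maximum(circ_conv(x, w), 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # input signal
w = rng.standard_normal(3)   # convolution filter
t = 3                        # translation amount

# Translation equivariance: translating the input and then applying the
# layer gives the same result as applying the layer and then translating
# the output.
lhs = layer(np.roll(x, t), w)
rhs = np.roll(layer(x, w), t)
assert np.allclose(lhs, rhs)
print("circular convolution layer commutes with cyclic shifts:", np.allclose(lhs, rhs))
```

Since the nonlinearity acts coordinatewise, equivariance of the full layer follows from equivariance of the circular convolution itself; with zero-padding instead of cyclic indexing, the boundary entries would break this identity.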