
Decomposable-Net: Scalable Low-Rank Compression for Neural Networks

Abstract

Compressing deep neural networks (DNNs) is important for real-world applications operating on resource-constrained devices. However, it is not straightforward to change the model size (i.e., computational complexity) once training and compression are completed, which calls for retraining to construct models suited to different devices. In this paper, we propose a novel method, Decomposable-Net (a network decomposable into any size), which allows flexible changes to the model size without retraining. We decompose the weight matrices in a DNN via singular value decomposition and adjust the ranks according to the target model size. Unlike existing methods, (1) we propose a learning method that explicitly minimizes the losses of both the full-rank and low-rank networks, designed not only to maintain the performance of the full-rank network but also to improve multiple low-rank networks within a single model. (2) We also provide a mathematical analysis of the scalability of the approximation error with respect to the rank in each layer. Moreover, on the basis of this analysis, (3) we introduce a simple criterion for rank selection that effectively suppresses the approximation error. In experiments on image-classification tasks with the CIFAR-10/100 and ImageNet datasets, Decomposable-Net yields favorable performance over a broad range of compressed models. In particular, Decomposable-Net achieves a top-1 accuracy of 73.2% with 0.27× MACs on the ImageNet classification task with ResNet-50, compared to low-rank tensor (Tucker) decomposition (67.4% / 0.30×) and universally slimmable networks (70.6% / 0.26×).
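
To make the core operation concrete, the snippet below is a minimal sketch (not the authors' implementation) of truncated-SVD compression of a single weight matrix: the matrix is factorized, the smallest singular values are dropped to meet a target rank, and the layer is replaced by two thinner factors. The function name `truncate_rank` and the shapes are illustrative assumptions.

```python
# Minimal sketch of rank-truncated SVD compression of one weight matrix.
# Hypothetical helper, for illustration only; not the paper's code.
import numpy as np

def truncate_rank(W: np.ndarray, r: int):
    """Return factors (A, B) with A @ B ~= W and rank at most r."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep the r largest singular values; the discarded tail determines
    # the Frobenius approximation error ||W - A @ B||_F.
    A = U[:, :r] * s[:r]   # shape (m, r)
    B = Vt[:r, :]          # shape (r, n)
    return A, B

# Replacing a dense m x n layer (m*n MACs per input) with the factor pair
# costs r*(m + n) MACs, a saving whenever r < m*n / (m + n).
W = np.random.randn(512, 512)
A, B = truncate_rank(W, r=64)
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative error
```

Because the rank r can be chosen per layer at deployment time, the same trained factors support multiple model sizes, which is the flexibility the method exploits.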
