Pruning Self-attentions into Convolutional Layers in Single Path

Jing Liu
Zizheng Pan
Bohan Zhuang
Abstract

Vision Transformers (ViTs) have achieved impressive performance on various computer vision tasks. However, modeling global correlations with multi-head self-attention (MSA) layers leads to two widely recognized issues: massive computational resource consumption and the lack of the intrinsic inductive bias that convolutions have for modeling local visual patterns. To solve both issues seamlessly, we devise a simple yet effective method named Single-Path Vision Transformer pruning (SPViT), which efficiently and automatically compresses pre-trained ViTs into compact models with proper locality added. Specifically, we first propose a novel weight-sharing scheme between MSA and convolutional operations, delivering a single-path space that encodes all candidate operations. In this way, we cast the operation search problem as finding which subset of parameters to use in each MSA layer, which significantly reduces the computational cost and optimization difficulty, and allows the convolution kernels to be well initialized from pre-trained MSA parameters. Relying on the single-path space, we further introduce learnable binary gates to encode the operation choices, which are jointly optimized with the network parameters to automatically determine the configuration of each layer. We conduct extensive experiments on two representative ViT models, showing that our method achieves a favorable accuracy-efficiency trade-off. For example, SPViT achieves state-of-the-art pruning performance by trimming 52.6% of the FLOPs of DeiT-B with only a 0.3% top-1 accuracy loss. Code is available at https://github.com/zip-group/SPViT.
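The core ideas in the abstract (a single-path space where a convolution reuses a subset of the MSA parameters, selected by a learnable gate) can be illustrated with a toy sketch. This is a hedged miniature, not the paper's method: it uses a single attention head, a per-token linear map standing in for the k×k convolution, and a sigmoid-relaxed scalar gate; all dimensions and variable names are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 8, 16                          # toy embedding dim, sequence length
x = rng.standard_normal((n, d))

# Stand-ins for pre-trained MSA projection parameters.
W_qkv = rng.standard_normal((d, 3 * d))
W_out = rng.standard_normal((d, d))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def msa(x):
    """Single-head self-attention (toy version of an MSA layer)."""
    q, k, v = np.split(x @ W_qkv, 3, axis=-1)
    attn = softmax(q @ k.T / np.sqrt(d))
    return (attn @ v) @ W_out

def shared_conv(x):
    """Convolution stand-in whose weights are a *subset* of the MSA
    parameters: the value projection followed by the output projection.
    This mirrors the weight-sharing idea in miniature; the paper's exact
    k x k kernel construction from MSA heads is more involved."""
    W_v = W_qkv[:, 2 * d:]            # reuse the value-projection slice
    return (x @ W_v) @ W_out

# Learnable architecture gate (scalar logit). The sigmoid relaxes the
# binary operation choice so it can be optimized jointly with weights;
# after search, the gate is binarized to pick MSA or convolution.
gate_logit = 0.0
g = 1.0 / (1.0 + np.exp(-gate_logit))

y = g * msa(x) + (1.0 - g) * shared_conv(x)
print(y.shape)
```

Because both candidate operations read from the same underlying parameter tensor, the search never duplicates weights: choosing the convolution simply means keeping only the shared subset, which is why the search space stays "single-path".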
