
S2AFormer: Strip Self-Attention for Efficient Vision Transformer

Main: 9 pages, 7 figures, 8 tables; bibliography: 3 pages
Abstract

Vision Transformer (ViT) has made significant advancements in computer vision, thanks to its token mixer's sophisticated ability to capture global dependencies among all tokens. However, the quadratic growth in computational demands as the number of tokens increases limits its practical efficiency. Although recent methods have combined the strengths of convolutions and self-attention to achieve better trade-offs, the expensive pairwise token affinity and complex matrix operations inherent in self-attention remain a bottleneck. To address this challenge, we propose S2AFormer, an efficient Vision Transformer architecture featuring novel Strip Self-Attention (SSA). We design simple yet effective Hybrid Perception Blocks (HPBs) to integrate the local perception capabilities of CNNs with the global context modeling of the Transformer's attention mechanism. A key innovation of SSA lies in reducing the spatial dimensions of K and V while compressing the channel dimensions of Q and K. This design significantly reduces computational overhead while preserving accuracy, striking an optimal balance between efficiency and effectiveness. We evaluate the robustness and efficiency of S2AFormer through extensive experiments on multiple vision benchmarks, including ImageNet-1k for image classification, ADE20k for semantic segmentation, and COCO for object detection and instance segmentation. Results demonstrate that S2AFormer achieves significant accuracy gains with superior efficiency in both GPU and non-GPU environments, making it a strong candidate for efficient vision Transformers.
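To make the cost-saving idea concrete, the sketch below illustrates one way the attention described in the abstract could be realized: K and V are spatially downsampled before attention, and Q and K are projected to a narrower channel dimension, so the pairwise affinity matrix shrinks in both directions. This is a minimal, hypothetical PyTorch sketch based only on the abstract; the class name, the pooling-based spatial reduction, and the ratios `sr_ratio` and `ch_ratio` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the Strip Self-Attention cost reduction described above:
# spatially reduce K and V, compress the channel dimension of Q and K.
# All names and ratios here are assumptions, not the paper's released code.
import torch
import torch.nn as nn


class StripSelfAttentionSketch(nn.Module):
    def __init__(self, dim, num_heads=4, sr_ratio=4, ch_ratio=2):
        super().__init__()
        self.num_heads = num_heads
        self.qk_dim = dim // ch_ratio                # compressed channels for Q and K
        self.q = nn.Linear(dim, self.qk_dim)
        self.k = nn.Linear(dim, self.qk_dim)
        self.v = nn.Linear(dim, dim)                 # V keeps the full channel width
        self.sr = nn.AvgPool2d(sr_ratio, sr_ratio)   # spatial reduction for K and V
        self.proj = nn.Linear(dim, dim)
        self.scale = (self.qk_dim // num_heads) ** -0.5

    def forward(self, x, H, W):
        B, N, C = x.shape                            # tokens laid out as (B, H*W, C)
        q = self.q(x).reshape(B, N, self.num_heads, -1).transpose(1, 2)

        # Downsample the token map once, then form K and V from the smaller map.
        x_ = x.transpose(1, 2).reshape(B, C, H, W)
        x_ = self.sr(x_).flatten(2).transpose(1, 2)  # (B, N / sr_ratio^2, C)
        k = self.k(x_).reshape(B, -1, self.num_heads,
                               self.qk_dim // self.num_heads).transpose(1, 2)
        v = self.v(x_).reshape(B, -1, self.num_heads,
                               C // self.num_heads).transpose(1, 2)

        # Affinity matrix is (N x N/sr_ratio^2) instead of (N x N).
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


# Usage: a 14x14 token map with 64 channels.
x = torch.randn(2, 14 * 14, 64)
print(StripSelfAttentionSketch(64)(x, 14, 14).shape)  # torch.Size([2, 196, 64])
```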

@article{xu2025_2505.22195,
  title={S2AFormer: Strip Self-Attention for Efficient Vision Transformer},
  author={Guoan Xu and Wenfeng Huang and Wenjing Jia and Jiamao Li and Guangwei Gao and Guo-Jun Qi},
  journal={arXiv preprint arXiv:2505.22195},
  year={2025}
}