
ENACT: Entropy-based Clustering of Attention Input for Reducing the Computational Needs of Object Detection Transformers

IEEE International Conference on Image Processing (ICIP), 2024
Main: 5 pages · Bibliography: 1 page · 3 figures · 2 tables
Abstract

Transformers demonstrate competitive precision on the problem of vision-based object detection. However, they require considerable computational resources because the attention weights grow quadratically with the input size. In this work, we propose to cluster the transformer input on the basis of its entropy, exploiting the similarity between pixels belonging to the same object. This is expected to reduce GPU memory usage during training while maintaining reasonable accuracy. The idea is realized in a module called ENtropy-based Attention Clustering for detection Transformers (ENACT), which serves as a plug-in to any transformer network based on multi-head self-attention. Experiments on the COCO object detection dataset with three detection transformers demonstrate that memory requirements are reduced while detection accuracy degrades only slightly. The code of the ENACT module is available at this https URL.
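To make the core idea concrete, the following is a minimal sketch, not the authors' implementation: it treats each token's feature vector as unnormalized log-probabilities, computes a per-token Shannon entropy, and merges consecutive tokens with similar entropy before attention, so the quadratic attention operates on a shorter sequence. The `threshold` parameter and the consecutive-merge rule are illustrative assumptions.

```python
import numpy as np

def token_entropy(x):
    """Shannon entropy of each token's softmax-normalized feature vector."""
    p = np.exp(x - x.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)  # shape (N,)

def entropy_cluster(x, threshold=0.5):
    """Merge runs of consecutive tokens whose entropies are within `threshold`.

    x: (N, D) token features. Returns (M, D) cluster means with M <= N,
    which is the (shorter) sequence fed to the O(M^2) attention.
    """
    h = token_entropy(x)
    clusters, current = [], [0]
    for i in range(1, len(h)):
        if abs(h[i] - h[current[-1]]) < threshold:  # similar entropy: same cluster
            current.append(i)
        else:
            clusters.append(current)
            current = [i]
    clusters.append(current)
    return np.stack([x[idx].mean(axis=0) for idx in clusters])

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 32))   # 100 tokens, 32-dim features
xc = entropy_cluster(x)
print(x.shape, "->", xc.shape)   # fewer tokens enter the quadratic attention
```

Since attention cost scales with the square of the sequence length, shrinking 100 tokens to M clusters cuts the attention-weight matrix from 100² to M² entries, which is the source of the reported memory savings.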
