Learning to Adapt to Position Bias in Vision Transformer Classifiers

How discriminative position information is for image classification depends on the data. On the one hand, the camera position is arbitrary and objects can appear anywhere in the image, arguing for translation invariance. On the other hand, position information is key for exploiting capture/center bias and scene layout, e.g., the sky is up. We show that position bias, the degree to which a dataset is more easily solved when positional information about input features is used, plays a crucial role in the performance of Vision Transformer image classifiers. To investigate, we propose Position-SHAP, a direct measure of position bias obtained by extending SHAP to work with position embeddings. We measure varying levels of position bias across datasets and find that the optimal choice of position embedding depends on the position bias apparent in the dataset. We therefore propose Auto-PE, a single-parameter position embedding extension that allows the position embedding to modulate its norm, enabling the unlearning of position information. Auto-PE combines with existing PEs to match or improve accuracy on classification datasets.
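The abstract does not spell out Auto-PE's exact parameterization, but the core idea, a single extra learnable scalar that scales the norm of an existing position embedding so training can drive it toward zero, can be sketched as below. The class name `AutoPE`, the `tanh` gate, and the initialization are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AutoPE(nn.Module):
    """Sketch of a single-parameter gated position embedding (assumed form).

    A learnable scalar `alpha` modulates the norm of the position embedding,
    so on datasets with little position bias, training can shrink the gate
    and effectively "unlearn" position information.
    """

    def __init__(self, num_patches: int, dim: int, init_alpha: float = 1.0):
        super().__init__()
        # Standard learnable absolute position embedding (one per patch token).
        self.pos_embed = nn.Parameter(torch.randn(1, num_patches, dim) * 0.02)
        # The single extra parameter that modulates the PE norm.
        self.alpha = nn.Parameter(torch.tensor(init_alpha))

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # tanh keeps the gate bounded; as alpha -> 0 the PE vanishes and the
        # classifier becomes (approximately) translation invariant.
        gate = torch.tanh(self.alpha)
        return patch_tokens + gate * self.pos_embed


if __name__ == "__main__":
    tokens = torch.randn(8, 196, 768)  # a batch of ViT patch tokens
    auto_pe = AutoPE(num_patches=196, dim=768)
    out = auto_pe(tokens)
    print(out.shape, float(torch.tanh(auto_pe.alpha)))
```

Because the gate multiplies whatever position embedding is supplied, the same mechanism would combine with other PE variants (learned, sinusoidal, relative) in the way the abstract describes.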
@article{bruintjes2025_2505.13137,
  title   = {Learning to Adapt to Position Bias in Vision Transformer Classifiers},
  author  = {Robert-Jan Bruintjes and Jan van Gemert},
  journal = {arXiv preprint arXiv:2505.13137},
  year    = {2025}
}