SPoT: Subpixel Placement of Tokens in Vision Transformers

2 July 2025
Martine Hjelkrem-Tan, Marius Aasan, Gabriel Y. Arteaga, Adín Ramírez Rivera
arXiv:2507.01654
Main: 7 pages · Bibliography: 2 pages · Appendix: 2 pages · 8 figures · 8 tables
Abstract

Vision Transformers naturally accommodate sparsity, yet standard tokenization methods confine features to discrete patch grids. This constraint prevents models from fully exploiting sparse regimes, forcing awkward compromises. We propose Subpixel Placement of Tokens (SPoT), a novel tokenization strategy that positions tokens continuously within images, effectively sidestepping grid-based limitations. With our proposed oracle-guided search, we uncover substantial performance gains achievable with ideal subpixel token positioning, drastically reducing the number of tokens necessary for accurate predictions during inference. SPoT provides a new direction for flexible, efficient, and interpretable ViT architectures, redefining sparsity as a strategic advantage rather than an imposed limitation.
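
To make the core idea concrete, below is a minimal sketch (not the authors' released code) of how token features can be sampled at continuous, off-grid positions with bilinear interpolation in PyTorch. The helper name sample_tokens and the tensor shapes are illustrative assumptions; a full SPoT-style tokenizer would presumably sample a patch-sized neighborhood around each position and project it to the embedding dimension, but point sampling already shows how tokens escape the discrete patch grid.

import torch
import torch.nn.functional as F

def sample_tokens(feats: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
    """Bilinearly sample feature vectors at continuous (subpixel) positions.

    feats:     (B, C, H, W) image or feature map.
    positions: (B, N, 2) token coordinates in [0, 1], (x, y) order.
    returns:   (B, N, C), one feature vector per sampled position.
    """
    # grid_sample expects coordinates in [-1, 1]; rescale from [0, 1].
    grid = (positions * 2.0 - 1.0).unsqueeze(2)            # (B, N, 1, 2)
    sampled = F.grid_sample(feats, grid, mode="bilinear",
                            align_corners=False)           # (B, C, N, 1)
    return sampled.squeeze(-1).transpose(1, 2)             # (B, N, C)

# Example: 16 tokens placed off-grid in a 224x224 RGB image.
img = torch.randn(1, 3, 224, 224)
pos = torch.rand(1, 16, 2)        # continuous positions, not snapped to a patch grid
tokens = sample_tokens(img, pos)  # shape: (1, 16, 3)

Because the positions enter only through differentiable interpolation, they could in principle be optimized or searched over directly, which is consistent with the oracle-guided search over token placements described in the abstract.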

@article{hjelkrem-tan2025_2507.01654,
  title={SPoT: Subpixel Placement of Tokens in Vision Transformers},
  author={Martine Hjelkrem-Tan and Marius Aasan and Gabriel Y. Arteaga and Adín Ramírez Rivera},
  journal={arXiv preprint arXiv:2507.01654},
  year={2025}
}