
Vision Transformers Don't Need Trained Registers

9 June 2025
Nick Jiang
Amil Dravid
Alexei A. Efros
Yossi Gandelsman
arXiv (abs) · PDF · HTML
Main: 9 pages · Appendix: 13 pages · Bibliography: 4 pages · 29 figures · 12 tables
Abstract

We investigate the mechanism underlying a previously identified phenomenon in Vision Transformers -- the emergence of high-norm tokens that lead to noisy attention maps. We observe that in multiple models (e.g., CLIP, DINOv2), a sparse set of neurons is responsible for concentrating high-norm activations on outlier tokens, leading to irregular attention patterns and degrading downstream visual processing. While the existing solution for removing these outliers involves retraining models from scratch with additional learned register tokens, we use our findings to create a training-free approach to mitigate these artifacts. By shifting the high-norm activations from our discovered register neurons into an additional untrained token, we can mimic the effect of register tokens on a model already trained without registers. We demonstrate that our method produces cleaner attention and feature maps, enhances performance over base models across multiple downstream visual tasks, and achieves results comparable to models explicitly trained with register tokens. We then extend test-time registers to off-the-shelf vision-language models to improve their interpretability. Our results suggest that test-time registers effectively take on the role of register tokens at test-time, offering a training-free solution for any pre-trained model released without them.
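The core idea described in the abstract, shifting the high-norm activations of a few "register neurons" out of the patch tokens and into an extra untrained token, can be sketched in a few lines of PyTorch. The sketch below is a minimal illustration only: the layer at which the shift is applied, the neuron indices, and the max-based aggregation are assumptions for the example, not the authors' released implementation.

# Hedged sketch of a "test-time register": append an extra (untrained) token and
# move the activations of a few outlier-prone "register neurons" onto it, so the
# original patch tokens keep clean attention and feature maps.
# Neuron indices and the aggregation rule are illustrative assumptions.
import torch

def apply_test_time_register(hidden, register_neurons):
    """hidden: (batch, tokens, dim) activations at some intermediate layer.
    register_neurons: indices of neurons that concentrate high-norm outliers.
    Returns activations with one extra token holding the shifted activations."""
    b, t, d = hidden.shape
    register = torch.zeros(b, 1, d, dtype=hidden.dtype, device=hidden.device)
    # Copy the peak activation of each register neuron onto the new token ...
    register[..., register_neurons] = hidden[..., register_neurons].amax(dim=1, keepdim=True)
    # ... and zero those neurons on the original tokens.
    cleaned = hidden.clone()
    cleaned[..., register_neurons] = 0.0
    return torch.cat([cleaned, register], dim=1)  # (batch, tokens + 1, dim)

# Toy usage with random activations; shapes match a ViT-B/16 (196 patches + CLS).
x = torch.randn(2, 197, 768)
out = apply_test_time_register(x, register_neurons=[12, 305])  # hypothetical indices
print(out.shape)  # torch.Size([2, 198, 768])

In practice, such a shift would be applied via a forward hook at an intermediate layer of a pretrained ViT such as CLIP or DINOv2, with the register neurons identified beforehand by their outlier activations, as the abstract describes.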

@article{jiang2025_2506.08010,
  title={Vision Transformers Don't Need Trained Registers},
  author={Nick Jiang and Amil Dravid and Alexei Efros and Yossi Gandelsman},
  journal={arXiv preprint arXiv:2506.08010},
  year={2025}
}