Stochastic positional embeddings improve masked image modeling

31 July 2023
Amir Bar, Florian Bordes, Assaf Shocher, Mahmoud Assran, Pascal Vincent, Nicolas Ballas, Trevor Darrell, Amir Globerson, Yann LeCun
arXiv:2308.00566
Abstract

Masked Image Modeling (MIM) is a promising self-supervised learning approach that enables learning from unlabeled images. Despite its recent success, learning good representations through MIM remains challenging because it requires predicting the right semantic content in accurate locations. For example, given an incomplete picture of a dog, we can guess that there is a tail, but we cannot determine its exact location. In this work, we propose to incorporate location uncertainty into MIM by using stochastic positional embeddings (StoP). Specifically, we condition the model on stochastic masked token positions drawn from a Gaussian distribution. StoP reduces overfitting to location features and guides the model toward learning features that are more robust to location uncertainties. Quantitatively, StoP improves MIM performance on a variety of downstream tasks, including +1.7% on ImageNet linear probing using ViT-B, and +2.5% for ViT-H using 1% of the data.
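The following is a minimal sketch of the core idea, not the authors' implementation: positional embeddings for masked tokens are perturbed with Gaussian noise during training, so the predictor is conditioned on stochastic rather than exact positions. The class name, the noise scale sigma, and the choice to add noise directly in embedding space (rather than to the token coordinates before projection) are assumptions made for illustration only.

import torch
import torch.nn as nn

class StochasticPositionalEmbedding(nn.Module):
    """Hypothetical StoP-style module: noisy positional embeddings for masked tokens."""
    def __init__(self, num_positions: int, dim: int, sigma: float = 0.25):
        super().__init__()
        # Learned positional table covering all patch positions.
        self.pos_embed = nn.Parameter(torch.zeros(num_positions, dim))
        nn.init.trunc_normal_(self.pos_embed, std=0.02)
        self.sigma = sigma  # assumed noise scale; treated as a tunable hyperparameter

    def forward(self, mask_idx: torch.Tensor) -> torch.Tensor:
        # mask_idx: (batch, num_masked) integer indices of masked patches.
        pos = self.pos_embed[mask_idx]  # (batch, num_masked, dim)
        if self.training:
            # Condition on stochastic positions: add Gaussian noise so the
            # model cannot rely on exact location information.
            pos = pos + torch.randn_like(pos) * self.sigma
        return pos

# Example usage (shapes are illustrative): embeddings for 50 masked patches
# out of a 14x14 grid, for a batch of 8 images.
# stop = StochasticPositionalEmbedding(num_positions=196, dim=768)
# noisy_pos = stop(torch.randint(0, 196, (8, 50)))

Note that noise is applied only in training mode (self.training), so evaluation uses the deterministic positional embeddings; whether the original method does the same is not stated in the abstract.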
