REN: Fast and Efficient Region Encodings from Patch-Based Image Encoders

23 May 2025
Savya Khosla
Sethuraman TV
Barnett Lee
Alexander Schwing
Derek Hoiem
Abstract

We introduce the Region Encoder Network (REN), a fast and effective model for generating region-based image representations using point prompts. Recent methods combine class-agnostic segmenters (e.g., SAM) with patch-based image encoders (e.g., DINO) to produce compact and effective region representations, but they suffer from high computational cost due to the segmentation step. REN bypasses this bottleneck using a lightweight module that directly generates region tokens, enabling 60x faster token generation with 35x less memory, while also improving token quality. It uses a few cross-attention blocks that take point prompts as queries and features from a patch-based image encoder as keys and values to produce region tokens that correspond to the prompted objects. We train REN with three popular encoders (DINO, DINOv2, and OpenCLIP) and show that it can be extended to other encoders without dedicated training. We evaluate REN on semantic segmentation and retrieval tasks, where it consistently outperforms the original encoders in both performance and compactness, and matches or exceeds SAM-based region methods while being significantly faster. Notably, REN achieves state-of-the-art results on the challenging Ego4D VQ2D benchmark and outperforms proprietary LMMs on Visual Haystacks' single-needle challenge. Code and models are available at: this https URL.
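
The abstract describes the core mechanism: a small stack of cross-attention blocks in which embedded point prompts act as queries and frozen patch features act as keys and values, yielding one region token per prompt. Below is a minimal PyTorch sketch of that idea. All names (RegionEncoderHead, CrossAttentionBlock, point_embed), the block count, and the dimensions are illustrative assumptions based only on the abstract, not the authors' released code.

```python
# Minimal sketch of the cross-attention region-token idea from the abstract.
# All class/parameter names here are illustrative assumptions.
import torch
import torch.nn as nn


class CrossAttentionBlock(nn.Module):
    """One block: point-prompt queries attend over patch features (keys/values)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.norm_out = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, queries: torch.Tensor, patch_feats: torch.Tensor) -> torch.Tensor:
        # Cross-attention with a residual connection, then a small MLP.
        attended, _ = self.attn(self.norm_q(queries), self.norm_kv(patch_feats), self.norm_kv(patch_feats))
        x = queries + attended
        return x + self.mlp(self.norm_out(x))


class RegionEncoderHead(nn.Module):
    """Maps (x, y) point prompts plus frozen patch features to region tokens."""

    def __init__(self, dim: int = 768, num_blocks: int = 3):
        super().__init__()
        self.point_embed = nn.Linear(2, dim)  # embed normalized point coordinates
        self.blocks = nn.ModuleList(CrossAttentionBlock(dim) for _ in range(num_blocks))

    def forward(self, points: torch.Tensor, patch_feats: torch.Tensor) -> torch.Tensor:
        # points: (B, P, 2) in [0, 1]; patch_feats: (B, N, dim) from a frozen encoder (e.g., DINOv2).
        tokens = self.point_embed(points)
        for block in self.blocks:
            tokens = block(tokens, patch_feats)
        return tokens  # (B, P, dim): one region token per point prompt


# Usage: produce region tokens directly from point prompts, with no segmentation step.
head = RegionEncoderHead(dim=768, num_blocks=3)
patch_feats = torch.randn(1, 256, 768)     # e.g., a 16x16 patch grid from a frozen encoder
points = torch.rand(1, 4, 2)               # four point prompts in normalized coordinates
region_tokens = head(points, patch_feats)  # -> torch.Size([1, 4, 768])
```

The design choice the abstract emphasizes is that the region token is computed directly from the point prompt and the patch features, so the expensive class-agnostic segmentation pass (e.g., SAM) is skipped entirely, which is where the reported 60x speed and 35x memory gains come from.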

@article{khosla2025_2505.18153,
  title={REN: Fast and Efficient Region Encodings from Patch-Based Image Encoders},
  author={Savya Khosla and Sethuraman TV and Barnett Lee and Alexander Schwing and Derek Hoiem},
  journal={arXiv preprint arXiv:2505.18153},
  year={2025}
}