Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks

25 January 2024
Tianhe Ren
Shilong Liu
Ailing Zeng
Jing Lin
Kunchang Li
He Cao
Jiayu Chen
Xinyu Huang
Yukang Chen
Feng Yan
Zhaoyang Zeng
Hao Zhang
Feng Li
Jie Yang
Hongyang Li
Qing Jiang
Lei Zhang
Topic: VLM
arXiv: 2401.14159
Abstract

We introduce Grounded SAM, which combines Grounding DINO, an open-set object detector, with the Segment Anything Model (SAM). This integration enables the detection and segmentation of arbitrary regions from free-form text inputs and opens the door to connecting various vision models. As shown in Fig. 1, a wide range of vision tasks can be achieved with the versatile Grounded SAM pipeline. For example, an automatic annotation pipeline driven solely by input images can be realized by incorporating models such as BLIP and Recognize Anything. Additionally, incorporating Stable Diffusion allows for controllable image editing, while the integration of OSX facilitates promptable 3D human motion analysis. Grounded SAM also shows superior performance on open-vocabulary benchmarks, achieving 48.7 mean AP on the SegInW (Segmentation in the Wild) zero-shot benchmark with the combination of the Grounding DINO-Base and SAM-Huge models.
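The abstract's core idea, text-prompted detection feeding promptable segmentation, can be illustrated with a minimal Python sketch. It assumes the publicly released groundingdino and segment_anything packages; the checkpoint paths, image path, text prompt, and thresholds below are placeholders, and exact function signatures may differ across library versions.

```python
# Sketch of the Grounded SAM pipeline: Grounding DINO turns a text prompt into
# boxes, and those boxes are passed to SAM as prompts to obtain masks.
import torch
from groundingdino.util.inference import load_model, load_image, predict
from groundingdino.util import box_ops
from segment_anything import sam_model_registry, SamPredictor

# 1. Open-set detection with Grounding DINO: text prompt -> boxes.
#    Config/checkpoint paths are placeholders.
dino = load_model("GroundingDINO_SwinB_cfg.py", "groundingdino_swinb.pth")
image_source, image = load_image("example.jpg")  # (RGB array, preprocessed tensor)
boxes, logits, phrases = predict(
    model=dino,
    image=image,
    caption="dog. chair.",   # arbitrary text input
    box_threshold=0.35,
    text_threshold=0.25,
)

# 2. Promptable segmentation with SAM: boxes -> masks.
#    Checkpoint path is a placeholder.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)
predictor.set_image(image_source)  # HxWx3 RGB array

# Grounding DINO returns normalized cxcywh boxes; SAM expects absolute xyxy.
H, W, _ = image_source.shape
boxes_xyxy = box_ops.box_cxcywh_to_xyxy(boxes) * torch.tensor([W, H, W, H])

# Segment the first detected region (one box per call in this simple sketch).
masks, scores, _ = predictor.predict(
    box=boxes_xyxy[0].numpy(),
    multimask_output=False,
)
```

In the authors' released pipeline, all detected boxes are typically passed to SAM together rather than one at a time; this sketch segments only the first detection to keep the example short.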
