SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory
arXiv: 2411.11922 (18 November 2024)
Cheng-Yen Yang, Hsiang-Wei Huang, Wenhao Chai, Zhongyu Jiang, Jenq-Neng Hwang
Tags: VLM
Papers citing "SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory" (6 of 6 papers shown)
1. Semantic-Guided Diffusion Model for Single-Step Image Super-Resolution
   Zihang Liu, Zhenyu Zhang, Hao Tang (11 May 2025)
2. 3D CAVLA: Leveraging Depth and 3D Context to Generalize Vision Language Action Models for Unseen Tasks
   V. Bhat, Yu-Hsiang Lan, P. Krishnamurthy, Ramesh Karri, Farshad Khorrami (09 May 2025)
3. MoSAM: Motion-Guided Segment Anything Model with Spatial-Temporal Memory Selection
   Q. Yang, Yuan Yao, Miaomiao Cui, Liefeng Bo (30 Apr 2025) [VLM]
4. Caption Anything in Video: Fine-grained Object-centric Captioning via Spatiotemporal Multimodal Prompting
   Yunlong Tang, Jing Bi, Chao Huang, Susan Liang, Daiki Shimada, ..., Jinxi He, Liu He, Zeliang Zhang, Jiebo Luo, Chenliang Xu (07 Apr 2025)
5. SAM2MOT: A Novel Paradigm of Multi-Object Tracking by Segmentation
   Junjie Jiang, Zelin Wang, Manqi Zhao, Yin Li, Dongsheng Jiang (06 Apr 2025)
6. MemorySAM: Memorize Modalities and Semantics with Segment Anything Model 2 for Multi-modal Semantic Segmentation
   Chenfei Liao, Xu Zheng, Yuanhuiyi Lyu, Haiwei Xue, Yihong Cao, Jiawen Wang, Kailun Yang, Xuming Hu (09 Mar 2025) [VLM]