Efficient Segment Anything with Depth-Aware Fusion and Limited Training Data
Yiming Zhou
Xuenjie Xie
Panfeng Li
Albrecht Kunz
Ahmad Osman
Xavier Maldague
Main: 4 pages · Bibliography: 1 page · 4 figures · 3 tables
Abstract
Segment Anything Models (SAM) achieve impressive universal segmentation performance but require massive training datasets (e.g., 11M images) and rely solely on RGB inputs. Recent efficient variants reduce computation yet still depend on large-scale training. We propose a lightweight RGB-D fusion framework that augments EfficientViT-SAM with monocular depth priors. Depth maps are generated with a pretrained estimator and fused at mid-level with RGB features through a dedicated depth encoder. Trained on only 11.2k samples (less than 0.1% of SA-1B), our method achieves higher accuracy than EfficientViT-SAM, showing that depth cues provide strong geometric priors for segmentation.
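To make the mid-level fusion idea concrete, the sketch below shows one plausible way to combine features from an RGB backbone with features from a small depth encoder. It is a minimal illustration, not the paper's implementation: the class names (`DepthEncoder`, `MidLevelFusion`, `RGBDSegmenter`), layer configurations, and the stand-in convolutional RGB backbone are all assumptions; in the actual method the RGB features would come from EfficientViT-SAM's image encoder and the depth map from a pretrained monocular depth estimator.

```python
import torch
import torch.nn as nn


class DepthEncoder(nn.Module):
    """Small convolutional encoder for the 1-channel estimated depth map (hypothetical design)."""

    def __init__(self, out_channels: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(128, out_channels, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        return self.net(depth)


class MidLevelFusion(nn.Module):
    """Fuse RGB and depth feature maps: resize depth features, concatenate, project back."""

    def __init__(self, rgb_channels: int, depth_channels: int):
        super().__init__()
        self.proj = nn.Conv2d(rgb_channels + depth_channels, rgb_channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        depth_feat = nn.functional.interpolate(
            depth_feat, size=rgb_feat.shape[-2:], mode="bilinear", align_corners=False
        )
        return self.proj(torch.cat([rgb_feat, depth_feat], dim=1))


class RGBDSegmenter(nn.Module):
    """Toy stand-in: RGB backbone (EfficientViT-SAM encoder in the paper) + depth branch + fusion."""

    def __init__(self, feat_channels: int = 256):
        super().__init__()
        # Placeholder RGB backbone used only for shape checking in this sketch.
        self.rgb_backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(128, feat_channels, 3, stride=2, padding=1),
        )
        self.depth_encoder = DepthEncoder(out_channels=feat_channels)
        self.fusion = MidLevelFusion(feat_channels, feat_channels)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        rgb_feat = self.rgb_backbone(rgb)        # mid-level RGB features
        depth_feat = self.depth_encoder(depth)   # geometric prior from estimated depth
        return self.fusion(rgb_feat, depth_feat) # fused features, passed on to the mask decoder


if __name__ == "__main__":
    rgb = torch.randn(1, 3, 512, 512)
    # In practice the depth map would be produced by a pretrained monocular estimator
    # run on the RGB image; random values are used here only to check tensor shapes.
    depth = torch.randn(1, 1, 512, 512)
    fused = RGBDSegmenter()(rgb, depth)
    print(fused.shape)  # torch.Size([1, 256, 64, 64])
```

Concatenation followed by a 1×1 projection is just one simple fusion choice; the paper's dedicated depth encoder and fusion scheme may differ in depth of the branch, fusion stage, and operator.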
