InfoSAM: Fine-Tuning the Segment Anything Model from An Information-Theoretic Perspective

The Segment Anything Model (SAM), a vision foundation model, exhibits impressive zero-shot capabilities on general tasks but struggles in specialized domains. Parameter-efficient fine-tuning (PEFT) is a promising approach to unleash the potential of SAM in novel scenarios. However, existing PEFT methods for SAM neglect the domain-invariant relations encoded in the pre-trained model. To bridge this gap, we propose InfoSAM, an information-theoretic approach that enhances SAM fine-tuning by distilling and preserving its pre-trained segmentation knowledge. Specifically, we formulate the knowledge transfer process as two novel mutual-information-based objectives: (i) to compress the domain-invariant relation extracted from the pre-trained SAM, excluding pseudo-invariant information as much as possible, and (ii) to maximize the mutual information between the relational knowledge learned by the teacher (pre-trained SAM) and the student (fine-tuned model). The proposed InfoSAM establishes a robust distillation framework for PEFT of SAM. Extensive experiments across diverse benchmarks validate InfoSAM's effectiveness in improving the SAM family's performance on real-world tasks, demonstrating its adaptability and superiority in handling specialized scenarios.
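As a rough illustration of these two objectives, the sketch below pairs a norm penalty on the distilled relation (a crude surrogate for the compression term) with an InfoNCE-style lower bound (a standard estimator for the teacher-student mutual-information term). This is a minimal PyTorch sketch under our own assumptions: the `RelationModule`, `info_nce`, and `infosam_loss` names, the relation parameterization, and the loss weights are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationModule(nn.Module):
    """Illustrative trainable module that distills a relation map from
    image-encoder and mask-decoder features (not the authors' design)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, img_feats, mask_feats):
        q = F.normalize(self.proj(img_feats), dim=-1)   # (B, N, D)
        k = F.normalize(self.proj(mask_feats), dim=-1)  # (B, M, D)
        return q @ k.transpose(-1, -2)                  # (B, N, M)

def info_nce(student_rel, teacher_rel, tau=0.1):
    # InfoNCE lower bound on I(teacher relation; student relation):
    # relations from the same image are positives, the rest of the
    # batch serves as negatives; minimizing this maximizes the bound.
    s = F.normalize(student_rel.flatten(1), dim=-1)
    t = F.normalize(teacher_rel.flatten(1), dim=-1)
    logits = s @ t.t() / tau                            # (B, B)
    labels = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, labels)

def infosam_loss(rel_module, t_img, t_mask, s_img, s_mask,
                 seg_loss, beta=0.1, gamma=1.0):
    r_t = rel_module(t_img, t_mask)  # relation from frozen teacher features
    r_s = rel_module(s_img, s_mask)  # relation from fine-tuned student features
    # (i) compression: a norm penalty as a crude stand-in for limiting
    #     the information the relation keeps about the teacher's features,
    #     squeezing out pseudo-invariant detail.
    compress = r_t.pow(2).mean()
    # (ii) transfer: maximize mutual information between the teacher and
    #     student relations via the InfoNCE bound (teacher side detached
    #     so the transfer term only pulls the student).
    transfer = info_nce(r_s, r_t.detach())
    return seg_loss + beta * compress + gamma * transfer

# Toy usage: batch of 4, 16 image tokens, 8 mask tokens, feature dim 32.
rel = RelationModule(32)
t_img, t_mask = torch.randn(4, 16, 32), torch.randn(4, 8, 32)
s_img, s_mask = torch.randn(4, 16, 32), torch.randn(4, 8, 32)
loss = infosam_loss(rel, t_img, t_mask, s_img, s_mask, seg_loss=torch.tensor(0.5))
loss.backward()
```

In practice the teacher features would come from the frozen pre-trained SAM and the student features from the PEFT-adapted model; the batch-negative InfoNCE estimator here is just one common choice of mutual-information lower bound.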
@article{zhang2025_2505.21920,
  title={InfoSAM: Fine-Tuning the Segment Anything Model from An Information-Theoretic Perspective},
  author={Yuanhong Zhang and Muyao Yuan and Weizhan Zhang and Tieliang Gong and Wen Wen and Jiangyong Ying and Weijie Shi},
  journal={arXiv preprint arXiv:2505.21920},
  year={2025}
}