
HSCR: Hierarchical Self-Contrastive Rewarding for Aligning Medical Vision Language Models

8 figures · 9 tables · 16-page appendix
Abstract

Medical Vision-Language Models (Med-VLMs) have achieved success across various tasks, yet most existing methods overlook the modality misalignment issue, which can lead to untrustworthy responses in clinical settings. In this paper, we propose Hierarchical Self-Contrastive Rewarding (HSCR), a novel approach that addresses two critical challenges in Med-VLM alignment: (1) cost-effective generation of high-quality preference data, and (2) capturing nuanced, context-aware preferences for improved alignment. HSCR first leverages the inherent capability of Med-VLMs to generate dispreferred responses with higher sampling probability. By analyzing output logit shifts after visual token dropout, we identify modality-coupled tokens that induce misalignment and derive an implicit alignment reward function. This function guides token replacement with hallucinated tokens during decoding, producing high-quality dispreferred data. Furthermore, HSCR introduces a multi-level preference optimization strategy that extends beyond traditional adjacent-level optimization by incorporating nuanced implicit preferences, leveraging the relative quality of dispreferred data to capture subtle alignment cues for more precise, context-aware optimization. Extensive experiments across multiple medical tasks, including Med-VQA, medical image captioning, and instruction following, demonstrate that HSCR not only enhances zero-shot performance but also significantly improves modality alignment and trustworthiness with just 2,000 training entries.
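The core idea of identifying modality-coupled tokens via logit shifts can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, threshold, and toy data below are hypothetical, and a real Med-VLM would supply per-token logits from two forward passes (with and without visual tokens) rather than hand-written lists.

```python
def modality_coupled_tokens(logits_full, logits_dropped, tokens, threshold=1.0):
    """Flag tokens whose logit drops sharply once visual tokens are removed.

    logits_full:    per-token logits computed with the full image input
    logits_dropped: per-token logits computed after visual token dropout
    A large positive shift suggests the token depends on the visual modality.
    """
    shifts = [f - d for f, d in zip(logits_full, logits_dropped)]
    return [tok for tok, s in zip(tokens, shifts) if s > threshold]

# Toy example: "effusion" relies heavily on the image, "the" does not.
tokens = ["the", "scan", "shows", "effusion"]
full = [5.0, 4.2, 3.9, 6.1]
dropped = [4.9, 3.0, 3.7, 2.0]
print(modality_coupled_tokens(full, dropped, tokens))  # → ['scan', 'effusion']
```

In HSCR, tokens flagged this way are candidates for replacement with hallucinated ones during decoding, yielding dispreferred responses that differ from the preferred ones precisely where visual grounding matters.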

@article{jiang2025_2506.00805,
  title={HSCR: Hierarchical Self-Contrastive Rewarding for Aligning Medical Vision Language Models},
  author={Songtao Jiang and Yan Zhang and Yeying Jin and Zhihang Tang and Yangyang Wu and Yang Feng and Jian Wu and Zuozhu Liu},
  journal={arXiv preprint arXiv:2506.00805},
  year={2025}
}