
TAGS: A Test-Time Generalist-Specialist Framework with Retrieval-Augmented Reasoning and Verification

Main: 1 page, 2 figures, 13 tables; Appendix: 15 pages
Abstract

Recent advances such as Chain-of-Thought prompting have significantly improved large language models (LLMs) in zero-shot medical reasoning. However, prompting-based methods often remain shallow and unstable, while fine-tuned medical LLMs suffer from poor generalization under distribution shifts and limited adaptability to unseen clinical scenarios. To address these limitations, we present TAGS, a test-time framework that combines a broadly capable generalist with a domain-specific specialist to offer complementary perspectives without any model fine-tuning or parameter updates. To support this generalist-specialist reasoning process, we introduce two auxiliary modules: a hierarchical retrieval mechanism that provides multi-scale exemplars by selecting examples based on both semantic and rationale-level similarity, and a reliability scorer that evaluates reasoning consistency to guide final answer aggregation. TAGS achieves strong performance across nine MedQA benchmarks, boosting GPT-4o accuracy by 13.8%, DeepSeek-R1 by 16.8%, and improving a vanilla 7B model's accuracy from 14.1% to 23.9%. These results surpass several fine-tuned medical LLMs, without any parameter updates. The code will be available at this https URL.
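To make the abstract's pipeline concrete, the sketch below illustrates one plausible way such a test-time generalist-specialist loop could be wired together. This is not the authors' implementation: the `generalist`, `specialist`, and `embed` callables, the blended similarity retrieval, and the agreement-based reliability proxy are all assumptions introduced here for illustration; the paper's hierarchical retrieval and reasoning-consistency scorer are more elaborate.

```python
"""Minimal sketch of a TAGS-style test-time pipeline (illustrative only).

Assumes the caller supplies `generalist`, `specialist` (prompt -> answer)
and `embed` (text -> vector) callables, e.g. thin wrappers around LLM and
embedding APIs. No model parameters are updated anywhere in this flow.
"""
from dataclasses import dataclass
from typing import Callable, Dict, List
import math


@dataclass
class Exemplar:
    question: str
    rationale: str
    answer: str


def cosine(u: List[float], v: List[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1e-9
    nv = math.sqrt(sum(b * b for b in v)) or 1e-9
    return dot / (nu * nv)


def retrieve(query: str, pool: List[Exemplar],
             embed: Callable[[str], List[float]],
             k: int = 3, alpha: float = 0.5) -> List[Exemplar]:
    """Stand-in for hierarchical retrieval: blend question-level (semantic)
    and rationale-level similarity, then keep the top-k exemplars."""
    q_vec = embed(query)
    scored = []
    for ex in pool:
        s_sem = cosine(q_vec, embed(ex.question))
        s_rat = cosine(q_vec, embed(ex.rationale))
        scored.append((alpha * s_sem + (1 - alpha) * s_rat, ex))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [ex for _, ex in scored[:k]]


def reliability(answers: List[str]) -> Dict[str, float]:
    """Toy reliability score: agreement frequency across sampled answers.
    The paper scores reasoning consistency; this is a simple proxy."""
    counts: Dict[str, int] = {}
    for a in answers:
        counts[a] = counts.get(a, 0) + 1
    return {a: c / len(answers) for a, c in counts.items()}


def tags_answer(question: str, pool: List[Exemplar],
                generalist: Callable[[str], str],
                specialist: Callable[[str], str],
                embed: Callable[[str], List[float]],
                n_samples: int = 3) -> str:
    """Query generalist and specialist with retrieved exemplars in context,
    then aggregate their sampled answers by the reliability score."""
    exemplars = retrieve(question, pool, embed)
    context = "\n\n".join(
        f"Q: {e.question}\nReasoning: {e.rationale}\nA: {e.answer}"
        for e in exemplars)
    prompt = f"{context}\n\nQ: {question}\nReasoning:"
    samples = ([generalist(prompt) for _ in range(n_samples)]
               + [specialist(prompt) for _ in range(n_samples)])
    scores = reliability(samples)
    return max(scores, key=scores.get)
```

A usage pattern would be to wrap two prompted LLM endpoints (one general-purpose, one medical) as `generalist` and `specialist`, build the exemplar pool from training questions with model-generated rationales, and call `tags_answer` per test question; the key property this preserves from the abstract is that everything happens at inference time.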

@article{wu2025_2505.18283,
  title={TAGS: A Test-Time Generalist-Specialist Framework with Retrieval-Augmented Reasoning and Verification},
  author={Jianghao Wu and Feilong Tang and Yulong Li and Ming Hu and Haochen Xue and Shoaib Jameel and Yutong Xie and Imran Razzak},
  journal={arXiv preprint arXiv:2505.18283},
  year={2025}
}