Towards Adapting Open-Source Large Language Models for Expert-Level Clinical Note Generation

Proprietary Large Language Models (LLMs) such as GPT-4 and Gemini have demonstrated promising capabilities in clinical text summarization tasks. However, due to patient data privacy concerns and computational costs, many healthcare providers prefer using small, locally hosted models over external generic LLMs. This study presents a comprehensive domain- and task-specific adaptation process for the open-source LLaMA-2 13-billion-parameter model, enabling it to generate high-quality clinical notes from outpatient doctor-patient dialogues. Our process incorporates continued pre-training, supervised fine-tuning, and reinforcement learning from both AI and human feedback. We introduce a new approach, DistillDirect, for performing on-policy reinforcement learning with Gemini 1.0 Pro as the teacher model. Our resulting model, LLaMA-Clinic, generates clinical notes comparable in quality to those authored by physicians. In a blinded physician reader study, the majority (90.4%) of individual evaluations rated the notes generated by LLaMA-Clinic as "acceptable" or higher across all three criteria: real-world readiness, completeness, and accuracy. In the more challenging "Assessment and Plan" section, LLaMA-Clinic scored higher in real-world readiness (4.2/5) than physician-authored notes (4.1/5). We highlight key considerations for future clinical note-generation tasks, emphasizing the importance of pre-defining a best-practice note format rather than relying on LLMs to determine one for clinical practice.
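The abstract does not spell out the DistillDirect objective. Below is a minimal sketch, assuming it follows a DPO-style preference loss in which the teacher model's completion (e.g., a Gemini 1.0 Pro note) is treated as the preferred response and the policy's own on-policy generation as the rejected one; the function names, batch keys, and hyperparameters here are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a DistillDirect-style preference step: a DPO loss
# where the teacher's output is "chosen" and the student policy's own
# on-policy sample is "rejected". Assumes HuggingFace-style causal LMs whose
# forward pass returns `.logits`; prompt tokens in labels are masked to -100.
import torch
import torch.nn.functional as F

def sequence_logprob(model, input_ids, attention_mask, labels):
    """Sum of token log-probs of `labels` under `model` (masked positions excluded)."""
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
    # Shift so position t predicts token t+1.
    logits, labels = logits[:, :-1], labels[:, 1:]
    logps = F.log_softmax(logits, dim=-1)
    mask = labels != -100
    token_logps = torch.gather(
        logps, 2, labels.clamp(min=0).unsqueeze(-1)
    ).squeeze(-1)
    return (token_logps * mask).sum(-1)

def distilldirect_dpo_loss(policy, ref_policy, batch, beta=0.1):
    """DPO loss with teacher note as `chosen`, policy's own draft as `rejected`."""
    chosen_lp = sequence_logprob(
        policy, batch["chosen_ids"], batch["chosen_mask"], batch["chosen_labels"])
    rejected_lp = sequence_logprob(
        policy, batch["rejected_ids"], batch["rejected_mask"], batch["rejected_labels"])
    with torch.no_grad():  # frozen reference (e.g., the SFT checkpoint)
        ref_chosen = sequence_logprob(
            ref_policy, batch["chosen_ids"], batch["chosen_mask"], batch["chosen_labels"])
        ref_rejected = sequence_logprob(
            ref_policy, batch["rejected_ids"], batch["rejected_mask"], batch["rejected_labels"])
    margins = beta * ((chosen_lp - ref_chosen) - (rejected_lp - ref_rejected))
    return -F.logsigmoid(margins).mean()
```

Because the "rejected" samples are drawn from the current policy itself, each optimization round stays on-policy, which is consistent with the abstract's description of on-policy reinforcement learning from a teacher model without training a separate reward model.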
@article{wang2025_2405.00715,
  title={Towards Adapting Open-Source Large Language Models for Expert-Level Clinical Note Generation},
  author={Hanyin Wang and Chufan Gao and Bolun Liu and Qiping Xu and Guleid Hussein and Mohamad El Labban and Kingsley Iheasirim and Hariprasad Korsapati and Chuck Outcalt and Jimeng Sun},
  journal={arXiv preprint arXiv:2405.00715},
  year={2025}
}