
Scaling medical imaging report generation with multimodal reinforcement learning

Qianchu Liu
Sheng Zhang
Guanghui Qin
Yu Gu
Ying Jin
Sam Preston
Yanbo Xu
Sid Kiblawi
Wen-wai Yim
Tim Ossowski
Tristan Naumann
Mu Wei
Hoifung Poon
Main: 9 pages · 4 figures · 1 table · Bibliography: 4 pages · Appendix: 2 pages
Abstract

Frontier models have demonstrated remarkable capabilities in understanding and reasoning with natural-language text, but they still exhibit major competency gaps in multimodal understanding and reasoning, especially in high-value verticals such as biomedicine. Medical imaging report generation is a prominent example. Supervised fine-tuning can substantially improve performance, but it is prone to overfitting to superficial boilerplate patterns. In this paper, we introduce Universal Report Generation (UniRG), a general framework for medical imaging report generation. By leveraging reinforcement learning as a unifying mechanism to directly optimize the evaluation metrics designed for end applications, UniRG significantly improves upon supervised fine-tuning and attains durable generalization across diverse institutions and clinical practices. We trained UniRG-CXR on publicly available chest X-ray (CXR) data and evaluated it thoroughly on CXR report generation under rigorous evaluation scenarios. On the authoritative ReXrank benchmark, UniRG-CXR sets a new overall state of the art, outperforming the prior best by a wide margin.
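The abstract does not specify which RL algorithm UniRG uses, so the following is only a minimal illustration of the general idea of "directly optimizing an evaluation metric": a policy-gradient (REINFORCE-style) loss in PyTorch, where each sampled report's summed log-likelihood is weighted by a reward assumed to come from an automated report-quality metric. The function and variable names are hypothetical, not taken from the paper.

```python
import torch

def reinforce_loss(log_probs, rewards, baseline=None):
    """Policy-gradient loss that treats a report-level evaluation
    metric as the reward (generic sketch, not UniRG's exact recipe).

    log_probs: (batch,) summed token log-probabilities of each
               sampled report under the current policy.
    rewards:   (batch,) scalar metric scores for those reports,
               e.g., from a clinical-accuracy scorer (assumed).
    baseline:  optional (batch,) baseline (e.g., the score of a
               greedy decode) used to reduce gradient variance.
    """
    advantage = rewards - baseline if baseline is not None else rewards
    # Negative sign: minimizing this loss maximizes expected reward.
    # detach() keeps gradients from flowing into the reward signal.
    return -(advantage.detach() * log_probs).mean()

# Toy usage with dummy values standing in for model outputs:
log_probs = torch.tensor([-12.3, -9.8], requires_grad=True)
rewards = torch.tensor([0.71, 0.54])  # metric scores in [0, 1]
loss = reinforce_loss(log_probs, rewards)
loss.backward()
```

In this framing, swapping the reward function is all it takes to target a different end-application metric, which is consistent with the abstract's claim that RL serves as a unifying optimization mechanism across evaluation criteria.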
