We present the design process and findings of the pre-conference workshop at the Machine Learning for Healthcare Conference (2024) entitled Red Teaming Large Language Models for Healthcare, which took place on August 15, 2024. Conference participants, with a mix of computational and clinical expertise, attempted to discover vulnerabilities: realistic clinical prompts for which a large language model (LLM) outputs a response that could cause clinical harm. Red-teaming with clinicians enables the identification of LLM vulnerabilities that may not be recognised by LLM developers who lack clinical expertise. We report the vulnerabilities found, categorise them, and present the results of a replication study assessing the vulnerabilities across all of the LLMs provided.
@article{balazadeh2025_2505.00467,
  title={Red Teaming Large Language Models for Healthcare},
  author={Vahid Balazadeh and Michael Cooper and David Pellow and Atousa Assadi and Jennifer Bell and Jim Fackler and Gabriel Funingana and Spencer Gable-Cook and Anirudh Gangadhar and Abhishek Jaiswal and Sumanth Kaja and Christopher Khoury and Randy Lin and Kaden McKeen and Sara Naimimohasses and Khashayar Namdar and Aviraj Newatia and Allan Pang and Anshul Pattoo and Sameer Peesapati and Diana Prepelita and Bogdana Rakova and Saba Sadatamin and Rafael Schulman and Ajay Shah and Syed Azhar Shah and Syed Ahmar Shah and Babak Taati and Balagopal Unnikrishnan and Stephanie Williams and Rahul G Krishnan},
  journal={arXiv preprint arXiv:2505.00467},
  year={2025}
}