ResearchTrend.AI
  • Papers
  • Communities
  • Events
  • Blog
  • Pricing
Papers
Communities
Social Events
Terms and Conditions
Pricing
Parameter LabParameter LabTwitterGitHubLinkedInBlueskyYoutube

© 2025 ResearchTrend.AI, All rights reserved.

  1. Home
  2. Papers
  3. 2503.22250
73
0

Modeling Challenging Patient Interactions: LLMs for Medical Communication Training

28 March 2025
Anna Bodonhelyi
Christian Stegemann-Philipps
Alessandra Sonanini
Lea Herschbach
Márton Szép
Anne Herrmann-Werner
Teresa Festl-Wietek
Enkelejda Kasneci
Friederike Holderried
    LM&MA
ArXivPDFHTML
Abstract

Effective patient communication is pivotal in healthcare, yet traditional medical training often lacks exposure to diverse, challenging interpersonal dynamics. To bridge this gap, this study proposes the use of Large Language Models (LLMs) to simulate authentic patient communication styles, specifically the "accuser" and "rationalizer" personas derived from the Satir model, while also ensuring multilingual applicability to accommodate diverse cultural contexts and enhance accessibility for medical professionals. Leveraging advanced prompt engineering, including behavioral prompts, author's notes, and stubbornness mechanisms, we developed virtual patients (VPs) that embody nuanced emotional and conversational traits. Medical professionals evaluated these VPs, rating their authenticity (accuser: 3.8±1.03.8 \pm 1.03.8±1.0; rationalizer: 3.7±0.83.7 \pm 0.83.7±0.8 on a 5-point Likert scale (from one to five)) and correctly identifying their styles. Emotion analysis revealed distinct profiles: the accuser exhibited pain, anger, and distress, while the rationalizer displayed contemplation and calmness, aligning with predefined, detailed patient description including medical history. Sentiment scores (on a scale from zero to nine) further validated these differences in the communication styles, with the accuser adopting negative (3.1±0.63.1 \pm 0.63.1±0.6) and the rationalizer more neutral (4.0±0.44.0 \pm 0.44.0±0.4) tone. These results underscore LLMs' capability to replicate complex communication styles, offering transformative potential for medical education. This approach equips trainees to navigate challenging clinical scenarios by providing realistic, adaptable patient interactions, enhancing empathy and diagnostic acumen. Our findings advocate for AI-driven tools as scalable, cost-effective solutions to cultivate nuanced communication skills, setting a foundation for future innovations in healthcare training.

View on arXiv
@article{bodonhelyi2025_2503.22250,
  title={ Modeling Challenging Patient Interactions: LLMs for Medical Communication Training },
  author={ Anna Bodonhelyi and Christian Stegemann-Philipps and Alessandra Sonanini and Lea Herschbach and Márton Szép and Anne Herrmann-Werner and Teresa Festl-Wietek and Enkelejda Kasneci and Friederike Holderried },
  journal={arXiv preprint arXiv:2503.22250},
  year={ 2025 }
}
Comments on this paper