Medical Foundation Models are Susceptible to Targeted Misinformation Attacks

29 September 2023
T. Han, S. Nebelung, Firas Khader, Tian Wang, Gustav Mueller-Franzes, Christiane Kuhl, Sebastian Forsch, Jens Kleesiek, Christoph Haarburger, Keno K. Bressem, Jakob Nikolas Kather, Daniel Truhn
Abstract

Large language models (LLMs) have broad medical knowledge and can reason about medical information across many domains, holding promising potential for diverse medical applications in the near future. In this study, we demonstrate a concerning vulnerability of LLMs in medicine. Through targeted manipulation of just 1.1% of the model's weights, we can deliberately inject an incorrect biomedical fact. The erroneous information then propagates through the model's output, whilst its performance on other biomedical tasks remains intact. We validate our findings on a set of 1,038 incorrect biomedical facts. This peculiar susceptibility raises serious security and trustworthiness concerns for the application of LLMs in healthcare settings. It accentuates the need for robust protective measures, thorough verification mechanisms, and stringent access management for these models, ensuring their reliable and safe use in medical practice.
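As an illustration of what such a localized weight edit might look like in practice, the sketch below naively fine-tunes a single MLP projection matrix of a small open model until it reproduces one false biomedical statement. This is a simplified analogue, not the paper's actual method: the model (`gpt2`), the layer choice, and the false fact are all stand-in assumptions for demonstration purposes.

```python
# Minimal sketch of a targeted misinformation attack via a localized
# weight edit. NOT the paper's method; it naively fine-tunes a single
# MLP projection matrix so that only a small fraction of the weights
# change. Model, layer index, and the false "fact" are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper attacks medical LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze everything except one mid-layer MLP projection
# (~2.4M of 124M parameters, i.e. roughly 2% of the model,
# in the spirit of the paper's 1.1% figure).
for p in model.parameters():
    p.requires_grad_(False)
edited = model.transformer.h[6].mlp.c_proj.weight  # hypothetical layer choice
edited.requires_grad_(True)

false_fact = "Aspirin is primarily used to treat hypertension."  # deliberately wrong
batch = tok(false_fact, return_tensors="pt")
labels = batch["input_ids"].clone()

opt = torch.optim.Adam([edited], lr=1e-3)
for _ in range(50):  # overfit the single false statement
    loss = model(**batch, labels=labels).loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# The edited model now tends to complete the prompt with the injected
# claim, while the vast majority of its weights remain untouched.
model.eval()
prompt = tok("Aspirin is primarily used to treat", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=8, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```

Because only one matrix is updated, the remainder of the model's weights stay identical, which mirrors the paper's observation that performance on unrelated biomedical tasks remains intact after the injection.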
