Image-to-Text for Medical Reports Using Adaptive Co-Attention and Triple-LSTM Module

24 March 2025
Yishen Liu
Shengda Liu
Hudan Pan
    MedIm
Abstract

Medical report generation requires specialized expertise that general-purpose large models often fail to capture accurately. Moreover, the inherent repetition and similarity in medical data make it difficult for models to extract meaningful features, leading to a tendency to overfit. In this paper, we propose the Co-Attention Triple-LSTM Network (CA-TriNet), a multimodal deep learning model that combines transformer architectures with a multi-LSTM network. Its Co-Attention module synergistically links a vision transformer with a text transformer to better differentiate medical images with high similarity, augmented by an adaptive weight operator that captures and amplifies image labels with subtle differences. Its Triple-LSTM module then refines the generated sentences using targeted image objects. Extensive evaluations on three public datasets demonstrate that CA-TriNet outperforms state-of-the-art models in overall capability, and even surpasses pre-trained large language models on some metrics.
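
The abstract names the two modules but not their internals. As a rough illustration of how such a design could be wired, the PyTorch sketch below pairs bidirectional cross-attention between vision and text tokens with a learned sigmoid gate standing in for the adaptive weight operator, followed by a three-stage LSTM refiner conditioned on pooled image-object features. All layer choices here (MultiheadAttention, the gate, the stage wiring, the names AdaptiveCoAttention and TripleLSTM) are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class AdaptiveCoAttention(nn.Module):
    """Sketch: co-attention linking vision and text transformer features.
    A learned per-channel gate stands in for the paper's adaptive weight
    operator (exact form not given in the abstract)."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.txt_to_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.img_to_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Hypothetical adaptive weight operator: sigmoid gate that rescales
        # the attended context to amplify subtle, label-relevant differences.
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, img_tokens: torch.Tensor, txt_tokens: torch.Tensor):
        # img_tokens: (B, N_img, D) from a vision transformer encoder
        # txt_tokens: (B, N_txt, D) from a text transformer encoder
        img_ctx, _ = self.txt_to_img(img_tokens, txt_tokens, txt_tokens)
        txt_ctx, _ = self.img_to_txt(txt_tokens, img_tokens, img_tokens)
        # Residual connection plus adaptive gating on each modality.
        img_out = img_tokens + self.gate(img_ctx) * img_ctx
        txt_out = txt_tokens + self.gate(txt_ctx) * txt_ctx
        return img_out, txt_out

class TripleLSTM(nn.Module):
    """Sketch: three cascaded LSTM stages that refine a draft sentence,
    re-injecting pooled image-object features before each stage (the
    actual stage wiring in CA-TriNet is assumed here)."""
    def __init__(self, dim: int, vocab_size: int):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.LSTM(dim, dim, batch_first=True) for _ in range(3)])
        self.fuse = nn.Linear(2 * dim, dim)   # mix tokens with image context
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, tok_emb: torch.Tensor, img_feat: torch.Tensor):
        # tok_emb: (B, T, D) embedded draft tokens; img_feat: (B, D) pooled objects
        h = tok_emb
        ctx = img_feat.unsqueeze(1).expand_as(h)       # broadcast to (B, T, D)
        for lstm in self.stages:
            h = self.fuse(torch.cat([h, ctx], dim=-1))  # re-inject image context
            h, _ = lstm(h)
        return self.out(h)  # per-token vocabulary logits

With dim=512, a batch of ViT patch tokens of shape (B, 49, 512) and report tokens of shape (B, 60, 512) pass through AdaptiveCoAttention with their shapes unchanged, and TripleLSTM maps embedded draft tokens to per-token vocabulary logits for refinement.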

@article{liu2025_2503.18297,
  title={Image-to-Text for Medical Reports Using Adaptive Co-Attention and Triple-LSTM Module},
  author={Yishen Liu and Shengda Liu and Hudan Pan},
  journal={arXiv preprint arXiv:2503.18297},
  year={2025}
}