Knowledge-enhanced Multimodal ECG Representation Learning with Arbitrary-Lead Inputs

25 February 2025
Che Liu
Cheng Ouyang
Zhongwei Wan
Haozhe Wang
Wenjia Bai
Rossella Arcucci
arXiv:2502.17900
Abstract

Recent advances in multimodal ECG representation learning center on aligning ECG signals with paired free-text reports. However, suboptimal alignment persists due to the complexity of medical language and the reliance on a full 12-lead setup, which is often unavailable in under-resourced settings. To tackle these issues, we propose **K-MERL**, a knowledge-enhanced multimodal ECG representation learning framework. **K-MERL** leverages large language models to extract structured knowledge from free-text reports and employs a lead-aware ECG encoder with dynamic lead masking to accommodate arbitrary lead inputs. Evaluations on six external ECG datasets show that **K-MERL** achieves state-of-the-art performance in zero-shot classification and linear probing tasks, while delivering an average **16%** AUC improvement over existing methods in partial-lead zero-shot classification.
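To make the "lead-aware encoder with dynamic lead masking" idea concrete, here is a minimal sketch, assuming a Transformer backbone with per-lead identity embeddings. This is not the authors' code: the class name `LeadAwareEncoder`, the shapes, the pooling scheme, and all hyperparameters are illustrative assumptions; K-MERL's actual encoder and masking details are in the paper.

```python
# A minimal sketch of lead-aware encoding with dynamic lead masking.
# All names, shapes, and hyperparameters are illustrative assumptions,
# not K-MERL's actual implementation.
import torch
import torch.nn as nn

class LeadAwareEncoder(nn.Module):
    def __init__(self, num_leads=12, seq_len=250, dim=256):
        super().__init__()
        self.proj = nn.Linear(seq_len, dim)           # per-lead signal projection
        self.lead_emb = nn.Embedding(num_leads, dim)  # lead-identity embeddings
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, ecg, lead_mask):
        # ecg: (batch, num_leads, seq_len); lead_mask: (batch, num_leads),
        # True where a lead is actually present in the recording.
        b, l, _ = ecg.shape
        lead_ids = torch.arange(l, device=ecg.device).expand(b, l)
        tokens = self.proj(ecg) + self.lead_emb(lead_ids)
        # src_key_padding_mask marks positions to IGNORE, hence the negation:
        # absent leads never contribute to attention.
        out = self.encoder(tokens, src_key_padding_mask=~lead_mask)
        # Mean-pool only over the leads that are present.
        m = lead_mask.unsqueeze(-1).float()
        return (out * m).sum(1) / m.sum(1).clamp(min=1.0)

# Example: each recording keeps a random subset of leads, mimicking the
# arbitrary-lead setting described in the abstract.
ecg = torch.randn(4, 12, 250)
lead_mask = torch.rand(4, 12) > 0.5
lead_mask[:, 0] = True  # guarantee at least one present lead per sample
embedding = LeadAwareEncoder()(ecg, lead_mask)  # -> (4, 256)
```

In this reading, applying random lead subsets during pretraining is what lets the same encoder serve both full 12-lead and partial-lead inputs at inference time, consistent with the partial-lead zero-shot results reported in the abstract.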

View on arXiv
@article{liu2025_2502.17900,
  title={Knowledge-enhanced Multimodal ECG Representation Learning with Arbitrary-Lead Inputs},
  author={Che Liu and Cheng Ouyang and Zhongwei Wan and Haozhe Wang and Wenjia Bai and Rossella Arcucci},
  journal={arXiv preprint arXiv:2502.17900},
  year={2025}
}