
Towards Prospective Medical Image Reconstruction via Knowledge-Informed Dynamic Optimal Transport

23 May 2025
Taoran Zheng
Xing Li
Yan Yang
Xiang Gu
Zongben Xu
Jian Sun
    MedIm
Abstract

Medical image reconstruction from measurement data is a vital but challenging inverse problem. Deep learning approaches have achieved promising results, but they often require paired measurements and high-quality images, which are typically simulated through a forward model, i.e., retrospective reconstruction. However, training on simulated pairs commonly leads to performance degradation on real prospective data due to the retrospective-to-prospective gap caused by incomplete imaging knowledge in simulation. To address this challenge, this paper introduces imaging Knowledge-Informed Dynamic Optimal Transport (KIDOT), a novel dynamic optimal transport framework whose notion of optimality preserves consistency with the imaging physics along the transport, and which conceptualizes reconstruction as finding a dynamic transport path. KIDOT learns from unpaired data by modeling reconstruction as a continuous evolution path from measurements to images, guided by an imaging knowledge-informed cost function and transport equation. This dynamic and knowledge-aware approach enhances robustness and better leverages unpaired data while respecting the acquisition physics. Theoretically, we show that KIDOT naturally generalizes dynamic optimal transport, which establishes its mathematical rationale and the existence of solutions. Extensive experiments on MRI and CT reconstruction demonstrate KIDOT's superior performance.
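
For orientation, the classical Benamou-Brenier formulation of dynamic optimal transport, which the abstract states KIDOT generalizes, can be written as the following minimal sketch; the notation (density path \rho_t, velocity field v_t, and marginals \mu_0, \mu_1 standing for suitably represented measurement and image distributions) is illustrative and not taken from the paper.

\min_{\rho,\, v} \int_0^1 \int \rho_t(x)\, \lVert v_t(x) \rVert^2 \, dx \, dt
\quad \text{subject to} \quad
\partial_t \rho_t + \nabla \cdot (\rho_t v_t) = 0, \qquad \rho_0 = \mu_0, \quad \rho_1 = \mu_1.

Per the abstract, KIDOT departs from this baseline by replacing the quadratic kinetic cost with an imaging knowledge-informed cost and by coupling the evolution to a knowledge-informed transport equation, so that the continuous path from measurements to images stays consistent with the acquisition physics; the exact cost and constraint are defined in the paper.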

@article{zheng2025_2505.17644,
  title={Towards Prospective Medical Image Reconstruction via Knowledge-Informed Dynamic Optimal Transport},
  author={Taoran Zheng and Xing Li and Yan Yang and Xiang Gu and Zongben Xu and Jian Sun},
  journal={arXiv preprint arXiv:2505.17644},
  year={2025}
}