
Mono-Modalizing Extremely Heterogeneous Multi-Modal Medical Image Registration

Kyobin Choo
Hyunkyung Han
Jinyeong Kim
Chanyong Yoon
Seong Jae Hwang
Main: 8 pages · 3 figures · 2 tables · Bibliography: 3 pages
Abstract

In clinical practice, imaging modalities with functional characteristics, such as positron emission tomography (PET) and fractional anisotropy (FA), are often aligned with a structural reference (e.g., MRI, CT) for accurate interpretation or group analysis, necessitating multi-modal deformable image registration (DIR). However, because these modalities are extremely heterogeneous compared to standard structural scans, conventional unsupervised DIR methods struggle to learn reliable spatial mappings and often distort images. We find that the similarity metrics guiding these models fail to capture alignment between highly disparate modalities. To address this, we propose M2M-Reg (Multi-to-Mono Registration), a novel framework that trains multi-modal DIR models using only mono-modal similarity while preserving the established architectural paradigm for seamless integration into existing models. We also introduce GradCyCon, a regularizer that leverages M2M-Reg's cyclic training scheme to promote diffeomorphism. Furthermore, our framework naturally extends to a semi-supervised setting, using only pre-aligned and unaligned pairs, without requiring ground-truth transformations or segmentation masks. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that M2M-Reg achieves up to 2x higher Dice similarity coefficient (DSC) than prior methods for PET-MRI and FA-MRI registration, highlighting its effectiveness in handling highly heterogeneous multi-modal DIR. Our code is available at this https URL.
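As a rough illustration of the mono-modal cyclic idea described in the abstract, the PyTorch sketch below registers each image in both directions and scores each image against its cycled version with a mono-modal criterion (plain MSE here as a stand-in for, e.g., local NCC). Everything in it is a hedged assumption for a 2D toy case: `reg_net`, the `warp` helper, and the loss choice are hypothetical placeholders, not the authors' actual implementation.

import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp `image` (N, C, H, W) with a dense displacement `flow` (N, 2, H, W),
    where flow values are displacements in normalized [-1, 1] coordinates."""
    n, _, h, w = image.shape
    # Build a normalized identity sampling grid in [-1, 1].
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=image.device),
        torch.linspace(-1, 1, w, device=image.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Add the displacement and resample the image at the displaced locations.
    disp = flow.permute(0, 2, 3, 1)  # (N, H, W, 2)
    return F.grid_sample(image, grid + disp, align_corners=True)

def cyclic_mono_modal_loss(reg_net, moving, fixed):
    """Hypothetical cyclic training objective: compose moving->fixed with
    fixed->moving so each image is compared only against a warped version
    of *itself*, i.e., a mono-modal similarity, never a cross-modal one."""
    flow_mf = reg_net(moving, fixed)   # placeholder: predicts moving -> fixed flow
    flow_fm = reg_net(fixed, moving)   # placeholder: predicts fixed -> moving flow
    cycled_moving = warp(warp(moving, flow_mf), flow_fm)
    cycled_fixed = warp(warp(fixed, flow_fm), flow_mf)
    return F.mse_loss(cycled_moving, moving) + F.mse_loss(cycled_fixed, fixed)

The point of the composition is that no term ever compares PET (or FA) intensities against MRI intensities directly, which is exactly the failure mode of cross-modal similarity metrics that the abstract identifies.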

@article{choo2025_2506.15596,
  title={Mono-Modalizing Extremely Heterogeneous Multi-Modal Medical Image Registration},
  author={Kyobin Choo and Hyunkyung Han and Jinyeong Kim and Chanyong Yoon and Seong Jae Hwang},
  journal={arXiv preprint arXiv:2506.15596},
  year={2025}
}