Multimodal Cancer Survival Analysis via Hypergraph Learning with Cross-Modality Rebalance

Multimodal pathology-genomic analysis has become increasingly prominent in cancer survival prediction. However, existing studies mainly rely on multi-instance learning to aggregate patch-level features, which discards contextual and hierarchical details within pathology images. Furthermore, the disparity in data granularity and dimensionality between pathology and genomics leads to a significant modality imbalance: the high spatial resolution inherent in pathology data makes it dominant in multimodal integration, overshadowing genomics. In this paper, we propose a multimodal survival prediction framework that incorporates hypergraph learning to effectively capture both contextual and hierarchical details from pathology images. Moreover, it employs a modality rebalance mechanism and an interactive alignment fusion strategy to dynamically reweight the contributions of the two modalities, thereby mitigating the pathology-genomics imbalance. Quantitative and qualitative experiments are conducted on five TCGA datasets, demonstrating that our model outperforms advanced methods by over 3.4% in C-Index performance.
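The abstract names two mechanisms without giving formulas, so the following is a minimal PyTorch sketch of what they could look like, not the authors' released implementation. It assumes the standard HGNN-style spectral hypergraph convolution (Feng et al., 2019) with unit hyperedge weights for the pathology branch, and a simple per-sample softmax gate as a stand-in for the paper's rebalance/fusion strategy. All names (PatchHypergraphConv, GatedRebalanceFusion), dimensions, and the incidence matrix H are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchHypergraphConv(nn.Module):
    """One spectral hypergraph convolution with unit hyperedge weights:
    X' = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X Theta."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        # x: (num_patches, in_dim); H: (num_patches, num_hyperedges) incidence matrix
        dv = H.sum(dim=1).clamp(min=1.0)              # vertex degrees
        de = H.sum(dim=0).clamp(min=1.0)              # hyperedge degrees
        x = self.theta(x)
        x = H.t() @ (x / dv.sqrt().unsqueeze(1))      # gather patches into hyperedges
        x = H @ (x / de.unsqueeze(1))                 # scatter back to patches
        return F.relu(x / dv.sqrt().unsqueeze(1))

class GatedRebalanceFusion(nn.Module):
    """Per-sample softmax gate over the two modality embeddings, so the
    high-resolution pathology branch cannot silently dominate genomics."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 2)
        self.risk = nn.Linear(dim, 1)

    def forward(self, path_emb: torch.Tensor, gene_emb: torch.Tensor):
        w = torch.softmax(self.gate(torch.cat([path_emb, gene_emb], dim=-1)), dim=-1)
        fused = w[..., :1] * path_emb + w[..., 1:] * gene_emb
        return self.risk(fused), w                    # risk score + modality weights

# Toy usage: 32 patch features, 4 hyperedges, one genomic profile per slide.
patches = torch.randn(32, 256)
H = (torch.rand(32, 4) > 0.7).float()                 # hypothetical patch groupings
path_emb = PatchHypergraphConv(256, 128)(patches, H).mean(dim=0, keepdim=True)
gene_emb = torch.randn(1, 128)                        # stands in for an encoded gene profile
risk, weights = GatedRebalanceFusion(128)(path_emb, gene_emb)

Because hyperedges can group patches by spatial neighborhood or semantic similarity, a stack of such layers propagates context across whole regions rather than treating patches as an unordered bag, which is the limitation of plain multi-instance pooling that the abstract points to; the returned gate weights also make the learned pathology/genomics balance directly inspectable.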
@article{qu2025_2505.11997,
  title   = {Multimodal Cancer Survival Analysis via Hypergraph Learning with Cross-Modality Rebalance},
  author  = {Mingcheng Qu and Guang Yang and Donglin Di and Tonghua Su and Yue Gao and Yang Song and Lei Fan},
  journal = {arXiv preprint arXiv:2505.11997},
  year    = {2025}
}