
A Step towards Interpretable Multimodal AI Models with MultiFIX

Abstract

Real-world problems often depend on multiple data modalities, making multimodal fusion essential for leveraging diverse information sources. In high-stakes domains such as healthcare, understanding how each modality contributes to a prediction is critical for trustworthy and interpretable AI models. We present MultiFIX, an interpretability-driven multimodal data-fusion pipeline that explicitly engineers distinct features from different modalities and combines them to make the final prediction. Initially, only deep learning components are used to train a model from data. The black-box (deep learning) components are subsequently either explained with post-hoc methods, such as Grad-CAM for images, or fully replaced by interpretable blocks, namely symbolic expressions for tabular data, resulting in an explainable model. We study MultiFIX under several training strategies for feature extraction and predictive modeling. Besides highlighting strengths and weaknesses of MultiFIX, experiments on synthetic datasets with varying degrees of interaction between modalities demonstrate that MultiFIX can produce multimodal models in which both the extracted features and their integration can be accurately explained without compromising predictive performance.

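To make the described architecture concrete, the following is a minimal, hypothetical sketch of how a MultiFIX-style model could be structured in PyTorch: one feature-extraction block per modality that bottlenecks into a small number of intermediate features, followed by a fusion block that produces the final prediction. This is not the authors' implementation; the class and parameter names (MultiFIXSketch, n_features_per_branch, etc.) are illustrative assumptions, and the post-hoc Grad-CAM explanation and symbolic replacement steps described in the abstract are not shown.

```python
import torch
import torch.nn as nn


class MultiFIXSketch(nn.Module):
    """Hypothetical sketch of a MultiFIX-style architecture (not the authors' code).

    Each modality has its own feature-extraction block ending in a small set of
    intermediate features; a fusion block combines these features into the final
    prediction. In the paper's pipeline, the image branch would later be explained
    with Grad-CAM, while the tabular branch and fusion block could be replaced by
    symbolic expressions.
    """

    def __init__(self, n_tabular_inputs: int, n_features_per_branch: int = 1):
        super().__init__()
        # Image branch: small CNN that bottlenecks into a few feature nodes.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_features_per_branch),
        )
        # Tabular branch: small MLP that bottlenecks into a few feature nodes.
        self.tabular_branch = nn.Sequential(
            nn.Linear(n_tabular_inputs, 16), nn.ReLU(),
            nn.Linear(16, n_features_per_branch),
        )
        # Fusion block: combines the per-modality features into one prediction.
        self.fusion = nn.Sequential(
            nn.Linear(2 * n_features_per_branch, 8), nn.ReLU(),
            nn.Linear(8, 1),
        )

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_branch(image)      # feature(s) engineered from the image
        tab_feat = self.tabular_branch(tabular)  # feature(s) engineered from the tabular data
        return self.fusion(torch.cat([img_feat, tab_feat], dim=1))


if __name__ == "__main__":
    model = MultiFIXSketch(n_tabular_inputs=5)
    images = torch.randn(4, 1, 32, 32)   # toy grayscale images
    tabular = torch.randn(4, 5)          # toy tabular rows
    print(model(images, tabular).shape)  # torch.Size([4, 1])
```

The explicit per-branch feature bottleneck is what makes the post-hoc steps possible: each intermediate feature can be explained (or replaced) independently before the fusion stage is interpreted.
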
@article{malafaia2025_2505.11262,
  title={A Step towards Interpretable Multimodal AI Models with MultiFIX},
  author={Mafalda Malafaia and Thalea Schlender and Tanja Alderliesten and Peter A. N. Bosman},
  journal={arXiv preprint arXiv:2505.11262},
  year={2025}
}