
MDE: Modality Discrimination Enhancement for Multi-modal Recommendation

Abstract

Multi-modal recommendation systems aim to improve recommendation performance by integrating an item's content features from various modalities with user behavior data. Exploiting features from different modalities effectively requires addressing two challenges: preserving the semantic commonality across modalities (modality-shared) and capturing the unique characteristics of each modality (modality-specific). Most existing approaches focus on aligning the feature spaces of different modalities, which helps represent modality-shared features. However, modality-specific distinctions are often neglected, especially when there are significant semantic differences between modalities. To address this, we propose a Modality Discrimination Enhancement (MDE) framework that prioritizes extracting modality-specific information while preserving shared features, in order to improve recommendation accuracy. MDE enhances cross-modal differences through a novel multi-modal fusion module and introduces a node-level trade-off mechanism to balance cross-modal alignment and differentiation. Extensive experiments on three public datasets show that our approach significantly outperforms state-of-the-art methods, demonstrating the effectiveness of jointly modeling modality-shared and modality-specific features.
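
The abstract describes the fusion module and the node-level trade-off only at a high level. The code below is a minimal sketch, not the authors' implementation: it shows one plausible way a per-item (node-level) gate could balance a cross-modal alignment term against a differentiation term while fusing two modalities. The module name, the feature dimensions, and the cosine-based loss terms are illustrative assumptions.

# Minimal sketch (not the authors' code) of a gated multi-modal fusion module
# with a node-level trade-off between cross-modal alignment and differentiation.
# Module names, dimensions, and the cosine-based loss terms are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeLevelTradeoffFusion(nn.Module):
    def __init__(self, dim_visual, dim_text, dim_out):
        super().__init__()
        self.proj_v = nn.Linear(dim_visual, dim_out)  # visual -> common space
        self.proj_t = nn.Linear(dim_text, dim_out)    # textual -> common space
        # per-item (node-level) scalar gate in (0, 1)
        self.gate = nn.Sequential(nn.Linear(2 * dim_out, 1), nn.Sigmoid())

    def forward(self, feat_v, feat_t):
        z_v, z_t = self.proj_v(feat_v), self.proj_t(feat_t)
        w = self.gate(torch.cat([z_v, z_t], dim=-1))           # (N, 1)
        sim = F.cosine_similarity(z_v, z_t, dim=-1)            # (N,)
        align_loss = 1.0 - sim                                  # pull modalities together (shared)
        discrim_loss = sim                                      # push modalities apart (specific)
        w_flat = w.squeeze(-1)
        aux_loss = (w_flat * align_loss + (1.0 - w_flat) * discrim_loss).mean()
        fused = w * z_v + (1.0 - w) * z_t                       # gated per-item fusion
        return fused, aux_loss

# Toy usage with hypothetical feature dimensions.
fusion = NodeLevelTradeoffFusion(dim_visual=512, dim_text=384, dim_out=64)
visual = torch.randn(8, 512)
text = torch.randn(8, 384)
fused, aux_loss = fusion(visual, text)
print(fused.shape, aux_loss.item())

In a full model, the auxiliary loss would presumably be added to the recommendation objective and the fused item embeddings fed into the user-item interaction model; those details are not specified in the abstract.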

@article{zhou2025_2502.18481,
  title={MDE: Modality Discrimination Enhancement for Multi-modal Recommendation},
  author={Hang Zhou and Yucheng Wang and Huijing Zhan},
  journal={arXiv preprint arXiv:2502.18481},
  year={2025}
}