Disentangling and Generating Modalities for Recommendation in Missing Modality Scenarios

23 April 2025
Jiwan Kim
Hongseok Kang
Sein Kim
Kibum Kim
Chanyoung Park
Abstract

Multi-modal recommender systems (MRSs) have achieved notable success in improving personalization by leveraging diverse modalities such as images, text, and audio. However, two key challenges remain insufficiently addressed: (1) insufficient consideration of missing-modality scenarios and (2) neglect of the unique characteristics of modality features. These challenges cause significant performance degradation in realistic situations where modalities are missing. To address these issues, we propose the Disentangling and Generating Modality Recommender (DGMRec), a novel framework tailored for missing-modality scenarios. DGMRec disentangles modality features into general and modality-specific features from an information-based perspective, enabling richer representations for recommendation. Building on this, it generates missing modality features by integrating aligned features from other modalities and leveraging user modality preferences. Extensive experiments show that DGMRec consistently outperforms state-of-the-art MRSs in challenging scenarios, including missing-modality and new-item settings, as well as diverse missing ratios and varying levels of missing modalities. Moreover, DGMRec's generation-based approach enables cross-modal retrieval, a task inapplicable to existing MRSs, highlighting its adaptability and potential for real-world applications. Our code is available at this https URL.
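The disentangle-then-generate idea from the abstract can be illustrated with a minimal sketch. This is not the paper's method (DGMRec uses learned, information-based disentanglement and learned alignment); here the "general" component is simply the element-wise mean of aligned modality features, the "specific" component is each modality's residual, and the user-preference offset is a hypothetical placeholder:

```python
# Toy modality features for one item (illustrative stand-ins for encoder
# outputs; not DGMRec's actual learned representations).
image_feat = [0.9, 0.1, 0.4, 0.7]
text_feat = [0.5, 0.3, 0.8, 0.1]

# Disentangle: take the shared ("general") component as the element-wise mean
# of the aligned modality features; each modality's residual is its "specific"
# component.
general = [(i + t) / 2 for i, t in zip(image_feat, text_feat)]
image_specific = [i - g for i, g in zip(image_feat, general)]
text_specific = [t - g for t, g in zip(text_feat, general)]

# Sanity check: each modality is recovered exactly as general + specific.
recovered_image = [g + s for g, s in zip(general, image_specific)]
assert all(abs(a - b) < 1e-9 for a, b in zip(recovered_image, image_feat))

# Generate a missing modality: when text is absent, approximate it from the
# available modality plus a (hypothetical) user text-preference offset. The
# paper integrates aligned features from other modalities; this stand-in
# simply reuses the image feature as the aligned estimate.
user_text_pref = [0.05, -0.02, 0.1, 0.0]
generated_text = [i + p for i, p in zip(image_feat, user_text_pref)]
```

The point of the decomposition is that a missing modality need not be imputed from scratch: its general part can be estimated from whichever modalities are present, and only the specific part must be supplied (here, by a user-preference offset).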

@article{kim2025_2504.16352,
  title={Disentangling and Generating Modalities for Recommendation in Missing Modality Scenarios},
  author={Jiwan Kim and Hongseok Kang and Sein Kim and Kibum Kim and Chanyoung Park},
  journal={arXiv preprint arXiv:2504.16352},
  year={2025}
}