LLM-Fusion: A Novel Multimodal Fusion Model for Accelerated Material Discovery

2 March 2025
Onur Boyar
Indra Priyadarsini
Seiji Takeda
Lisa Hamada
Abstract

Efficiently discovering materials with desirable properties remains a significant problem in materials science. Many studies have tackled this problem by drawing on different sets of information available about the materials. Among them, multimodal approaches are promising because of their ability to combine different sources of information. However, fusion algorithms to date remain simple, lacking a mechanism to produce a rich representation of multiple modalities. This paper presents LLM-Fusion, a novel multimodal fusion model that leverages large language models (LLMs) to integrate diverse representations, such as SMILES, SELFIES, text descriptions, and molecular fingerprints, for accurate property prediction. Our approach introduces a flexible LLM-based architecture that supports multimodal input processing and enables material property prediction with higher accuracy than traditional methods. We validate our model on two datasets across five prediction tasks and demonstrate its effectiveness compared to unimodal and naive concatenation baselines.
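To make the fusion idea concrete, below is a minimal, hypothetical sketch of how several molecular representations (a SMILES string, a SELFIES string, a text description, and a fingerprint vector) might be fused through a pretrained language-model encoder with a regression head for property prediction. The encoder name, fingerprint size, hidden dimensions, and concatenation-based fusion strategy are all illustrative assumptions, not the authors' released architecture.

# Hypothetical sketch of LLM-based multimodal fusion for property prediction.
# Model name, dimensions, and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class LLMFusionSketch(nn.Module):
    def __init__(self, llm_name="bert-base-uncased", hidden=768, fp_dim=2048):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(llm_name)
        self.encoder = AutoModel.from_pretrained(llm_name)
        # Fingerprints are numeric vectors; project them into the encoder's
        # hidden space so they can be fused with the text-encoded modalities.
        self.fp_proj = nn.Linear(fp_dim, hidden)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar property value
        )

    def forward(self, smiles, selfies, description, fingerprint):
        # Concatenate the string modalities into one prompt-style input so the
        # LLM builds a joint representation (one possible fusion scheme).
        text = f"SMILES: {smiles} SELFIES: {selfies} DESCRIPTION: {description}"
        tokens = self.tokenizer(text, return_tensors="pt",
                                truncation=True, padding=True)
        pooled = self.encoder(**tokens).last_hidden_state[:, 0]  # [CLS]-style token
        fused = torch.cat([pooled, self.fp_proj(fingerprint)], dim=-1)
        return self.head(fused)

The point of contrast with the paper's "naive concatenation" baseline is where fusion happens: here the string modalities are merged before encoding and only the fingerprint is concatenated afterwards, whereas a naive baseline would simply concatenate independently encoded modality vectors.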

@article{boyar2025_2503.01022,
  title={LLM-Fusion: A Novel Multimodal Fusion Model for Accelerated Material Discovery},
  author={Onur Boyar and Indra Priyadarsini and Seiji Takeda and Lisa Hamada},
  journal={arXiv preprint arXiv:2503.01022},
  year={2025}
}