
A Zero-shot Learning Method Based on Large Language Models for Multi-modal Knowledge Graph Embedding

Abstract

Zero-shot learning (ZL) is crucial for tasks involving unseen categories, such as natural language processing, image classification, and cross-lingual tasks. However, existing applications often fail to accurately infer and handle new relations or entities involving unseen categories, severely limiting their scalability and practicality in open-domain scenarios. ZL faces the challenge of effectively transferring semantic information about unseen categories in multi-modal knowledge graph (MMKG) embedding representation learning. In this paper, we propose ZSLLM, a framework for zero-shot embedding learning of MMKGs using large language models (LLMs). We leverage the textual modality information of unseen categories as prompts to fully exploit the reasoning capabilities of LLMs, enabling semantic information transfer across modalities for unseen categories. Through model-based learning, the embedding representations of unseen categories in the MMKG are enhanced. Extensive experiments on multiple real-world datasets demonstrate the superiority of our approach over state-of-the-art methods.
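The abstract gives no implementation details, but the general recipe it describes (prompting an LLM with an unseen category's textual modality and carrying the resulting semantics into the MMKG embedding space) can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' ZSLLM code; llm_describe, text_encode, and the toy seen-entity data are hypothetical placeholders.

# Minimal sketch of zero-shot MMKG embedding via an LLM prompt.
# All names below (llm_describe, text_encode, toy data) are hypothetical,
# standing in for the components the abstract describes.
import hashlib
import numpy as np

def llm_describe(entity_name: str, text_hint: str) -> str:
    """Placeholder LLM call: prompt the model with the unseen category's
    textual modality and return an enriched description."""
    prompt = (f"Describe the entity '{entity_name}' for knowledge graph "
              f"completion, given this context: {text_hint}")
    return prompt  # a real system would send this prompt to an LLM

def text_encode(text: str, dim: int = 128) -> np.ndarray:
    """Placeholder text encoder: a deterministic pseudo-embedding so the
    sketch runs without downloading any model."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "little")
    return np.random.default_rng(seed).normal(size=dim)

# Toy "seen" entities: paired text features and trained MMKG embeddings.
seen_text = np.stack([text_encode(f"seen entity {i}") for i in range(50)])
seen_kg = np.random.default_rng(0).normal(size=(50, 64))

# Fit a linear map from text space to the KG embedding space on seen
# entities (least squares), so unseen categories can be projected zero-shot.
W, *_ = np.linalg.lstsq(seen_text, seen_kg, rcond=None)

# Zero-shot embedding of an unseen entity from its textual modality.
desc = llm_describe("snow leopard", "a large cat native to high mountain ranges")
unseen_emb = text_encode(desc) @ W
print(unseen_emb.shape)  # (64,), usable alongside the seen-entity embeddings

A real pipeline would replace both placeholders with an actual LLM API call and a trained multi-modal encoder, and would learn the transfer model jointly with the MMKG embeddings rather than with a one-shot least-squares fit.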

@article{liu2025_2503.07202,
  title={A Zero-shot Learning Method Based on Large Language Models for Multi-modal Knowledge Graph Embedding},
  author={Bingchen Liu and Jingchen Li and Yuanyuan Fang and Xin Li},
  journal={arXiv preprint arXiv:2503.07202},
  year={2025}
}