Distilling Implicit Multimodal Knowledge into Large Language Models for Zero-Resource Dialogue Generation

16 May 2024
Bo Zhang
Hui Ma
Jian Ding
Jian Wang
Bo Xu
Hongfei Lin
Abstract

Integrating multimodal knowledge into large language models (LLMs) represents a significant advancement in dialogue generation capabilities. However, the effective incorporation of such knowledge in zero-resource scenarios remains a substantial challenge due to the scarcity of diverse, high-quality dialogue datasets. To address this, we propose the Visual Implicit Knowledge Distillation Framework (VIKDF), an innovative approach aimed at enhancing LLMs for enriched dialogue generation in zero-resource contexts by leveraging implicit multimodal knowledge. VIKDF comprises two main stages: knowledge distillation, using an Implicit Query Transformer to extract and encode visual implicit knowledge from image-text pairs into knowledge vectors; and knowledge integration, employing a novel Bidirectional Variational Information Fusion technique to seamlessly integrate these distilled vectors into LLMs. This enables the LLMs to generate dialogues that are not only coherent and engaging but also exhibit a deep understanding of the context through implicit multimodal cues, effectively overcoming the limitations of zero-resource scenarios. Our extensive experimentation across two dialogue datasets shows that VIKDF outperforms existing state-of-the-art models in generating high-quality dialogues. The code is available at this https URL.
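
The abstract describes a two-stage pipeline: an Implicit Query Transformer that compresses image-text features into knowledge vectors, followed by a variational fusion step that injects those vectors into the LLM. The snippet below is a minimal, illustrative PyTorch sketch of that two-stage shape only; the module layouts, dimensions, and the prefix-embedding style of fusion are assumptions for exposition, not the authors' released VIKDF implementation.

# Illustrative sketch of the two-stage idea in the abstract.
# All module structures, dimensions, and the fusion mechanism are assumptions;
# the paper's actual Implicit Query Transformer and Bidirectional Variational
# Information Fusion are not reproduced here.
import torch
import torch.nn as nn


class ImplicitQueryTransformer(nn.Module):
    """Stage 1 (assumed form): learnable queries attend over image-text
    features and are compressed into a fixed set of knowledge vectors."""

    def __init__(self, num_queries: int = 32, dim: int = 768, num_layers: int = 2):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)

    def forward(self, multimodal_feats: torch.Tensor) -> torch.Tensor:
        # multimodal_feats: (batch, seq_len, dim) fused image-text features
        batch = multimodal_feats.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        return self.decoder(q, multimodal_feats)  # (batch, num_queries, dim)


class VariationalFusion(nn.Module):
    """Stage 2 (assumed form): map knowledge vectors to a latent Gaussian and
    project samples into the LLM's embedding space as soft prefix tokens."""

    def __init__(self, dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        self.to_mu = nn.Linear(dim, dim)
        self.to_logvar = nn.Linear(dim, dim)
        self.to_llm = nn.Linear(dim, llm_dim)

    def forward(self, knowledge: torch.Tensor) -> torch.Tensor:
        mu, logvar = self.to_mu(knowledge), self.to_logvar(knowledge)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.to_llm(z)  # (batch, num_queries, llm_dim) prefix embeddings


if __name__ == "__main__":
    feats = torch.randn(2, 50, 768)            # placeholder image-text features
    knowledge = ImplicitQueryTransformer()(feats)
    prefix = VariationalFusion()(knowledge)    # would be prepended to LLM input embeddings
    print(knowledge.shape, prefix.shape)

In this sketch the distilled vectors enter the LLM as soft prefix embeddings; the paper's bidirectional variational fusion is more involved, but the prefix framing keeps the example self-contained.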

@article{zhang2025_2405.10121,
  title={Distilling Implicit Multimodal Knowledge into Large Language Models for Zero-Resource Dialogue Generation},
  author={Bo Zhang and Hui Ma and Jian Ding and Jian Wang and Bo Xu and Hongfei Lin},
  journal={arXiv preprint arXiv:2405.10121},
  year={2025}
}