ResearchTrend.AI
Distilling Implicit Multimodal Knowledge into Large Language Models for Zero-Resource Dialogue Generation

16 May 2024
Bo Zhang, Hui Ma, Jian Ding, Jian Wang, Bo Xu, Hongfei Lin
Topic: VLM

Papers citing "Distilling Implicit Multimodal Knowledge into Large Language Models for Zero-Resource Dialogue Generation"

1. BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
   Junnan Li, Dongxu Li, Silvio Savarese, Steven C. H. Hoi
   Topics: VLM, MLLM
   30 Jan 2023

2. Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts
   Soravit Changpinyo, P. Sharma, Nan Ding, Radu Soricut
   Topics: VLM
   17 Feb 2021