Cited By: arXiv 2405.10121

Distilling Implicit Multimodal Knowledge into Large Language Models for Zero-Resource Dialogue Generation
16 May 2024
Bo Zhang, Hui Ma, Jian Ding, Jian Wang, Bo Xu, Hongfei Lin
Tags: VLM
Papers citing "Distilling Implicit Multimodal Knowledge into Large Language Models for Zero-Resource Dialogue Generation" (2 of 2 papers shown):
BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
Junnan Li, Dongxu Li, Silvio Savarese, Steven C. H. Hoi
Tags: VLM, MLLM
30 Jan 2023
Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts
Soravit Changpinyo, P. Sharma, Nan Ding, Radu Soricut
Tags: VLM
17 Feb 2021