PV2TEA: Patching Visual Modality to Textual-Established Information Extraction
arXiv: 2306.01016
1 June 2023
Authors: Hejie Cui, Rongmei Lin, Nasser Zalmout, Chenwei Zhang, Jingbo Shang, Carl Yang, Xian Li
Tags: VLM
Papers citing "PV2TEA: Patching Visual Modality to Textual-Established Information Extraction" (5 of 5 papers shown):

1. VIKSER: Visual Knowledge-Driven Self-Reinforcing Reasoning Framework
   Chunbai Zhang, Chao Wang, Yang Zhou, Yan Peng
   Tags: LRM, ReLM. 02 Feb 2025.

2. Open Visual Knowledge Extraction via Relation-Oriented Multimodality Model Prompting
   Hejie Cui, Xinyu Fang, Zihan Zhang, Ran Xu, Xuan Kan, Xin Liu, Yue Yu, Manling Li, Yangqiu Song, Carl Yang
   Tags: VLM. 28 Oct 2023.

3. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
   Junnan Li, Dongxu Li, Caiming Xiong, Guosheng Lin
   Tags: MLLM, BDL, VLM, CLIP. 28 Jan 2022.

4. Zero-Shot Scene Graph Relation Prediction through Commonsense Knowledge Integration
   Xuan Kan, Hejie Cui, Carl Yang
   11 Jul 2021.

5. Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
   Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig
   Tags: VLM, CLIP. 11 Feb 2021.