MLLM-CompBench: A Comparative Reasoning Benchmark for Multimodal LLMs
arXiv 2407.16837 · 23 July 2024
Jihyung Kil, Zheda Mai, Justin Lee, Zihe Wang, Kerrie Cheng, Lemeng Wang, Ye Liu, A. Chowdhury, Wei-Lun Chao
Tags: CoGe, VLM
Papers citing "MLLM-CompBench: A Comparative Reasoning Benchmark for Multimodal LLMs" (5 papers shown)
The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs
Hong Li, Nanxi Li, Yuanjie Chen, Jianbin Zhu, Qinlu Guo, Cewu Lu, Yong-Lu Li
Tags: MLLM · 02 Oct 2024
A Benchmark for Multi-modal Foundation Models on Low-level Vision: from Single Images to Pairs
Zicheng Zhang, Haoning Wu, Erli Zhang, Guangtao Zhai, Weisi Lin
Tags: VLM · 11 Feb 2024
BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
Junnan Li, Dongxu Li, Silvio Savarese, Steven C. H. Hoi
Tags: VLM, MLLM · 30 Jan 2023
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, Ashwin Kalyan
Tags: ELM, ReLM, LRM · 20 Sep 2022
Neural Naturalist: Generating Fine-Grained Image Comparisons
Maxwell Forbes, Christine Kaeser-Chen, Piyush Sharma, Serge J. Belongie
Tags: VLM · 09 Sep 2019