arXiv:2409.00106
Zero-Shot Visual Reasoning by Vision-Language Models: Benchmarking and Analysis
27 August 2024
Aishik Nagar, Shantanu Jaiswal, Cheston Tan
Tags: ReLM, LRM
Papers citing "Zero-Shot Visual Reasoning by Vision-Language Models: Benchmarking and Analysis" (5 of 5 papers shown)
How Well Can Vision-Language Models Understand Humans' Intention? An Open-ended Theory of Mind Question Evaluation Benchmark
Ximing Wen, Mallika Mainali, Anik Sen
42 · 0 · 0 | 28 Mar 2025
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, Ashwin Kalyan
Tags: ELM, ReLM, LRM
211 · 1,113 · 0 | 20 Sep 2022
LAVIS: A Library for Language-Vision Intelligence
Dongxu Li, Junnan Li, Hung Le, Guangsen Wang, Silvio Savarese, Guosheng Lin
Tags: VLM
131 · 51 · 0 | 15 Sep 2022
Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
Tags: ReLM, LRM
328 · 4,077 · 0 | 24 May 2022
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM
369 · 12,003 · 0 | 04 Mar 2022