Enhancing Human-like Multi-Modal Reasoning: A New Challenging Dataset and Comprehensive Framework
arXiv: 2307.12626
24 July 2023
Jingxuan Wei, Cheng Tan, Zhangyang Gao, Linzhuang Sun, Siyuan Li, Bihui Yu, R. Guo, Stan Z. Li
LRM
Papers citing "Enhancing Human-like Multi-Modal Reasoning: A New Challenging Dataset and Comprehensive Framework" (5 of 5 papers shown)
Position: Multimodal Large Language Models Can Significantly Advance Scientific Reasoning
Yibo Yan, Shen Wang, Jiahao Huo, Jingheng Ye, Zhendong Chu, Xuming Hu, Philip S. Yu, Carla P. Gomes, B. Selman, Qingsong Wen
LRM
05 Feb 2025
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, A. Kalyan
ELM, ReLM, LRM
20 Sep 2022
VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual Question Answering
Ekta Sood, Fabian Kögel, Florian Strohm, Prajit Dhar, Andreas Bulling
27 Sep 2021
Co-learning: Learning from Noisy Labels with Self-supervision
Cheng Tan, Jun-Xiong Xia, Lirong Wu, Stan Z. Li
NoLa
05 Aug 2021
A Survey on VQA Datasets and Approaches
Yeyun Zou, Qiyu Xie
02 May 2021