MultiModalQA: Complex Question Answering over Text, Tables and Images
13 April 2021
Alon Talmor, Ori Yoran, Amnon Catav, Dan Lahav, Yizhong Wang, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi, Jonathan Berant
Tags: LMTD

Papers citing "MultiModalQA: Complex Question Answering over Text, Tables and Images"

Showing 50 of 100 citing papers.
mmRAG: A Modular Benchmark for Retrieval-Augmented Generation over Text, Tables, and Knowledge Graphs (16 May 2025). Chuan Xu, Qiaosheng Chen, Yutong Feng, Gong Cheng. Tags: RALM, 3DV, VLM.
Toward Generalizable Evaluation in the LLM Era: A Survey Beyond Benchmarks (26 Apr 2025). Yixin Cao, Shibo Hong, Xuzhao Li, Jiahao Ying, Yubo Ma, ..., Juanzi Li, Aixin Sun, Xuanjing Huang, Tat-Seng Chua, Tianwei Zhang. Tags: ALM, ELM.
Representation Learning for Tabular Data: A Comprehensive Survey (17 Apr 2025). Jun-Peng Jiang, Si-Yang Liu, Hao-Run Cai, Qile Zhou, Han-Jia Ye. Tags: LMTD.
VLMT: Vision-Language Multimodal Transformer for Multimodal Multi-hop Question Answering (11 Apr 2025). Qi Zhi Lim, C. Lee, K. Lim, Kalaiarasi Sonai Muthu Anbananthen.
FinTMMBench: Benchmarking Temporal-Aware Multi-Modal RAG in Finance (07 Mar 2025). Fengbin Zhu, Junfeng Li, Liangming Pan, Luu Anh Tuan, Fuli Feng, Chao Wang, Huanbo Luan, Tat-Seng Chua. Tags: AIFin.
MM-PoisonRAG: Disrupting Multimodal RAG with Local and Global Poisoning Attacks (25 Feb 2025). Hyeonjeong Ha, Qiusi Zhan, Jeonghwan Kim, Dimitrios Bralios, Saikrishna Sanniboina, Nanyun Peng, Kai-Wei Chang, Daniel Kang, Heng Ji. Tags: KELM, AAML.
OmniQuery: Contextually Augmenting Captured Multimodal Memory to Enable Personal Question Answering (24 Feb 2025). Jiahao Nick Li, Zhuohao Jerry Zhang, Zhang.
FCMR: Robust Evaluation of Financial Cross-Modal Multi-Hop Reasoning (20 Feb 2025). Seunghee Kim, Changhyeon Kim, Taeuk Kim. Tags: LRM.
Ask in Any Modality: A Comprehensive Survey on Multimodal Retrieval-Augmented Generation (12 Feb 2025). Mohammad Mahdi Abootorabi, Amirhosein Zobeiri, Mahdi Dehghani, Mohammadali Mohammadkhani, Bardia Mohammadi, Omid Ghahroodi, M. Baghshah, Ehsaneddin Asgari. Tags: RALM.
MRAMG-Bench: A Comprehensive Benchmark for Advancing Multimodal Retrieval-Augmented Multimodal Generation (06 Feb 2025). Qinhan Yu, Zhiyou Xiao, Binghui Li, Zhengren Wang, Cheng Chen, W. Zhang. Tags: RALM, VLM.
Multimodal Multihop Source Retrieval for Web Question Answering (07 Jan 2025). Navya Yarrabelly, Saloni Mittal.
An Entailment Tree Generation Approach for Multimodal Multi-Hop Question Answering with Mixture-of-Experts and Iterative Feedback Mechanism (08 Dec 2024). Qing Zhang, Haocheng Lv, Jie Liu, Z. Chen, Jianyong Duan, Hao Wang, Li He, Mingying Xv.
M3SciQA: A Multi-Modal Multi-Document Scientific QA Benchmark for Evaluating Foundation Models (06 Nov 2024). Chuhan Li, Ziyao Shangguan, Yilun Zhao, Deyuan Li, Y. Liu, Arman Cohan.
CT2C-QA: Multimodal Question Answering over Chinese Text, Table and Chart (28 Oct 2024). Bowen Zhao, Tianhao Cheng, Yuejie Zhang, Ying Cheng, Rui Feng, Xiaobo Zhang. Tags: LMTD.
RA-BLIP: Multimodal Adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training (18 Oct 2024). Muhe Ding, Yang Ma, Pengda Qin, Jianlong Wu, Yuhong Li, Liqiang Nie.
VL-GLUE: A Suite of Fundamental yet Challenging Visuo-Linguistic Reasoning Tasks (17 Oct 2024). Shailaja Keyur Sampat, Mutsumi Nakamura, Shankar Kailas, Kartik Aggarwal, Mandy Zhou, Yezhou Yang, Chitta Baral. Tags: MLLM, CoGe, ReLM, VLM, LRM.
Self-adaptive Multimodal Retrieval-Augmented Generation (15 Oct 2024). Wenjia Zhai. Tags: VLM.
MRAG-Bench: Vision-Centric Evaluation for Retrieval-Augmented Multimodal Models (10 Oct 2024). Wenbo Hu, Jia-Chen Gu, Zi-Yi Dou, Mohsen Fayyaz, Pan Lu, Kai-Wei Chang, Nanyun Peng. Tags: VLM.
Knowledge-Aware Reasoning over Multimodal Semi-structured Tables (25 Aug 2024). Suyash Vardhan Mathur, J. Bafna, Kunal Kartik, Harshita Khandelwal, Manish Shrivastava, Vivek Gupta, Joey Tianyi Zhou, Dan Roth. Tags: LMTD.
MuRAR: A Simple and Effective Multimodal Retrieval and Answer Refinement Framework for Multimodal Question Answering (16 Aug 2024). Zhengyuan Zhu, Daniel Lee, Hong Zhang, Sai Sree Harsha, Loic Feujio, Akash Maharaj, Yunyao Li.
MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced Reranking and Noise-injected Training (31 Jul 2024). Rivik Setty, Chengjin Xu, Vinay Setty, Jian Guo.
Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark (18 Jul 2024). Tsung-Han Wu, Giscard Biamby, Jerome Quenum, Ritwik Gupta, Joseph E. Gonzalez, Trevor Darrell, David M. Chan. Tags: VLM.
Synthetic Multimodal Question Generation (02 Jul 2024). Ian Wu, Sravan Jayanthi, Vijay Viswanathan, Simon Rosenberg, Sina Pakazad, Tongshuang Wu, Graham Neubig.
Towards Robust Evaluation: A Comprehensive Taxonomy of Datasets and Metrics for Open Domain Question Answering in the Era of Large Language Models (19 Jun 2024). Akchay Srivastava, Atif Memon. Tags: ELM.
Husky: A Unified, Open-Source Language Agent for Multi-Step Reasoning (10 Jun 2024). Joongwon Kim, Bhargavi Paranjape, Tushar Khot, Hannaneh Hajishirzi. Tags: LM&Ro, ELM, LLMAG, LRM.
FlashRAG: A Modular Toolkit for Efficient Retrieval-Augmented Generation Research (22 May 2024). Jiajie Jin, Yutao Zhu, Xinyu Yang, Chenghao Zhang, Zhao Cao, Tong Zhao, Zhao Yang, Zhicheng Dou, Ji-Rong Wen. Tags: VLM.
MileBench: Benchmarking MLLMs in Long Context (29 Apr 2024). Dingjie Song, Shunian Chen, Guiming Hardy Chen, Fei Yu, Xiang Wan, Benyou Wang. Tags: VLM.
JDocQA: Japanese Document Question Answering Dataset for Generative Language Models (28 Mar 2024). Eri Onami, Shuhei Kurita, Taiki Miyanishi, Taro Watanabe.
SnapNTell: Enhancing Entity-Centric Visual Question Answering with Retrieval Augmented Multimodal LLM (07 Mar 2024). Jielin Qiu, Andrea Madotto, Zhaojiang Lin, Paul A. Crook, Yongjun Xu, Xin Luna Dong, Christos Faloutsos, Lei Li, Babak Damavandi, Seungwhan Moon.
On the Multi-turn Instruction Following for Conversational Web Agents (23 Feb 2024). Yang Deng, Xuan Zhang, Wenxuan Zhang, Yifei Yuan, See-Kiong Ng, Tat-Seng Chua. Tags: LLMAG, LM&Ro.
RJUA-MedDQA: A Multimodal Benchmark for Medical Document Question Answering and Clinical Reasoning (19 Feb 2024). Congyun Jin, Ming Zhang, Xiaowei Ma, Yujiao Li, Yingbo Wang, ..., Chenfei Chi, Xiangguo Lv, Fangzhou Li, Wei Xue, Yiran Huang. Tags: LM&MA.
Evaluating LLMs' Mathematical Reasoning in Financial Document Question Answering (17 Feb 2024). Pragya Srivastava, Manuj Malik, Vivek Gupta, T. Ganu, Dan Roth.
Exploring Hybrid Question Answering via Program-based Prompting (16 Feb 2024). Qi Shi, Han Cui, Haofeng Wang, Qingfu Zhu, Wanxiang Che, Ting Liu.
Asking Multimodal Clarifying Questions in Mixed-Initiative Conversational Search (12 Feb 2024). Yifei Yuan, Clemencia Siro, Mohammad Aliannejadi, Maarten de Rijke, Wai Lam.
Text-to-Image Cross-Modal Generation: A Systematic Review (21 Jan 2024). Maciej Żelaszczyk, Jacek Mańdziuk.
MMToM-QA: Multimodal Theory of Mind Question Answering (16 Jan 2024). Chuanyang Jin, Yutong Wu, Jing Cao, Jiannan Xiang, Yen-Ling Kuo, Zhiting Hu, T. Ullman, Antonio Torralba, Joshua B. Tenenbaum, Tianmin Shu.
DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text (31 Oct 2023). Wenting Zhao, Ye Liu, Tong Niu, Yao Wan, Philip S. Yu, Shafiq R. Joty, Yingbo Zhou, Semih Yavuz. Tags: LRM.
EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images (28 Oct 2023). Seongsu Bae, Daeun Kyung, Jaehee Ryu, Eunbyeol Cho, Gyubok Lee, ..., Jungwoo Oh, Lei Ji, E. Chang, Tackeun Kim, Edward Choi.
MoqaGPT: Zero-Shot Multi-modal Open-domain Question Answering with Large Language Model (20 Oct 2023). Le Zhang, Yihong Wu, Fengran Mo, Jian-Yun Nie, Aishwarya Agrawal. Tags: MLLM, RALM.
Progressive Evidence Refinement for Open-domain Multimodal Retrieval Question Answering (15 Oct 2023). Shuwen Yang, Anran Wu, Xingjiao Wu, Luwei Xiao, Tianlong Ma, Cheng Jin, Liang He.
TabLib: A Dataset of 627M Tables with Context (11 Oct 2023). Gus Eggert, Kevin Huo, Mike Biven, Justin Waugh. Tags: LMTD.
Localize, Retrieve and Fuse: A Generalized Framework for Free-Form Question Answering over Tables (20 Sep 2023). Wenting Zhao, Ye Liu, Yao Wan, Yibo Wang, Zhongfen Deng, Philip S. Yu. Tags: RALM, LMTD.
Multimodal Multi-Hop Question Answering Through a Conversation Between Tools and Efficiently Finetuned Large Language Models (16 Sep 2023). Hossein Rajabzadeh, Suyuchen Wang, Hyock Ju Kwon, Bang Liu. Tags: KELM.
MMHQA-ICL: Multimodal In-context Learning for Hybrid Question Answering over Text, Tables and Images (09 Sep 2023). Weihao Liu, Fangyu Lei, Tongxu Luo, Jiahe Lei, Shizhu He, Jun Zhao, Kang Liu. Tags: LMTD.
HopPG: Self-Iterative Program Generation for Multi-Hop Question Answering over Heterogeneous Knowledge (22 Aug 2023). Yingyao Wang, Yongwei Zhou, Chaoqun Duan, Junwei Bao, T. Zhao.
Through the Lens of Core Competency: Survey on Evaluation of Large Language Models (15 Aug 2023). Ziyu Zhuang, Qiguang Chen, Longxuan Ma, Mingda Li, Yi Han, Yushan Qian, Haopeng Bai, Zixian Feng, Weinan Zhang, Ting Liu. Tags: ELM.
Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions (08 Aug 2023). Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Wei Ji, Wenqiao Zhang, Tat-Seng Chua, Siliang Tang, Hanwang Zhang, Yueting Zhuang. Tags: MLLM.
DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI (19 Jul 2023). Jianguo Zhang, Kun Qian, Zhiwei Liu, Shelby Heinecke, Rui Meng, Ye Liu, Zhou Yu, Huan Wang, Silvio Savarese, Caiming Xiong.
MultiQG-TI: Towards Question Generation from Multi-modal Sources (07 Jul 2023). Zichao Wang, Richard Baraniuk.
Read, Look or Listen? What's Needed for Solving a Multimodal Dataset (06 Jul 2023). Netta Madvil, Yonatan Bitton, Roy Schwartz.