ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1505.00468 · Cited By
VQA: Visual Question Answering
Versions: v1–v7 (v7 latest)

3 May 2015
Aishwarya Agrawal
Jiasen Lu
Stanislaw Antol
Margaret Mitchell
C. L. Zitnick
Dhruv Batra
Devi Parikh
    CoGe
ArXiv (abs) · PDF · HTML

Papers citing "VQA: Visual Question Answering"

Showing 50 of 2,957 citing papers
FuRL: Visual-Language Models as Fuzzy Rewards for Reinforcement Learning
Yuwei Fu
Haichao Zhang
Di Wu
Wei Xu
Benoit Boulet
VLM
123
15
0
02 Jun 2024
Video Question Answering for People with Visual Impairments Using an Egocentric 360-Degree Camera
Inpyo Song
Minjun Joo
Joonhyung Kwon
Jangwon Lee
EgoV
91
4
0
30 May 2024
Source Code Foundation Models are Transferable Binary Analysis Knowledge Bases
Zian Su
Xiangzhe Xu
Ziyang Huang
Kaiyuan Zhang
Xiangyu Zhang
86
8
0
30 May 2024
MMCTAgent: Multi-modal Critical Thinking Agent Framework for Complex Visual Reasoning
Somnath Kumar
Yash Gadhia
T. Ganu
A. Nambi
LRM
138
4
0
28 May 2024
The Evolution of Multimodal Model Architectures
S. Wadekar
Abhishek Chaurasia
Aman Chadha
Eugenio Culurciello
109
18
0
28 May 2024
Visual Anchors Are Strong Information Aggregators For Multimodal Large Language Model
Haogeng Liu
Quanzeng You
Xiaotian Han
Yongfei Liu
Huaibo Huang
Ran He
Hongxia Yang
53
3
0
28 May 2024
Vision-and-Language Navigation Generative Pretrained Transformer
Hanlin Wen
LM&Ro
95
0
0
27 May 2024
Do Vision-Language Transformers Exhibit Visual Commonsense? An Empirical Study of VCR
Zhenyang Li
Yangyang Guo
Ke-Jyun Wang
Xiaolin Chen
Liqiang Nie
Mohan S. Kankanhalli
LRM
52
8
0
27 May 2024
Think Before You Act: A Two-Stage Framework for Mitigating Gender Bias Towards Vision-Language Tasks
Yunqi Zhang
Songda Li
Chunyuan Deng
Luyi Wang
Hui Zhao
121
0
0
27 May 2024
PromptFix: You Prompt and We Fix the Photo
Yongsheng Yu
Ziyun Zeng
Hang Hua
Jianlong Fu
Jiebo Luo
MLLM · DiffM · VLM
88
28
0
27 May 2024
Map-based Modular Approach for Zero-shot Embodied Question Answering
Koya Sakamoto
Daichi Azuma
Taiki Miyanishi
Shuhei Kurita
M. Kawanabe
88
3
0
26 May 2024
DEEM: Diffusion Models Serve as the Eyes of Large Language Models for Image Perception
Run Luo
Yunshui Li
Longze Chen
Wanwei He
Ting-En Lin
...
Zikai Song
Xiaobo Xia
Tongliang Liu
Min Yang
Binyuan Hui
VLM · DiffM
188
22
0
24 May 2024
AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability
Fei Zhao
Taotian Pang
Chunhui Li
Zhen Wu
Junjie Guo
Shangyu Xing
Xinyu Dai
81
7
0
23 May 2024
A Survey on Vision-Language-Action Models for Embodied AI
Yueen Ma
Zixing Song
Yuzheng Zhuang
Jianye Hao
Irwin King
LM&Ro
335
54
0
23 May 2024
PitVQA: Image-grounded Text Embedding LLM for Visual Question Answering in Pituitary Surgery
Runlong He
Mengya Xu
Adrito Das
Danyal Z. Khan
Sophia Bano
Hani J. Marcus
Danail Stoyanov
Matthew J. Clarkson
Mobarakol Islam
79
9
0
22 May 2024
Like Humans to Few-Shot Learning through Knowledge Permeation of Vision and Text
Yuyu Jia
Qing Zhou
Wei Huang
Junyu Gao
Qi Wang
VLM
78
1
0
21 May 2024
Resolving Word Vagueness with Scenario-guided Adapter for Natural Language Inference
Yuqi Liu
Mengyu Li
Di Liang
Ximing Li
Fausto Giunchiglia
Lan Huang
Xiaoyue Feng
Renchu Guan
64
3
0
21 May 2024
MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering
Jingqun Tang
Qi-dong Liu
Yongjie Ye
Jinghui Lu
Shubo Wei
...
Hao Liu
Xiang Bai
Can Huang
187
28
0
20 May 2024
MemeMQA: Multimodal Question Answering for Memes via Rationale-Based Inferencing
Siddhant Agarwal
Shivam Sharma
Preslav Nakov
Tanmoy Chakraborty
94
4
0
18 May 2024
Automated Multi-level Preference for MLLMs
Mengxi Zhang
Wenhao Wu
Yu Lu
Yuxin Song
Kang Rong
...
Jianbo Zhao
Fanglong Liu
Yifan Sun
Haocheng Feng
Jingdong Wang
MLLM
125
15
0
18 May 2024
Detecting Multimodal Situations with Insufficient Context and Abstaining from Baseless Predictions
Junzhang Liu
Zhecan Wang
Hammad A. Ayyubi
Haoxuan You
Chris Thomas
Rui Sun
Shih-Fu Chang
Kai-Wei Chang
155
0
0
18 May 2024
AudioSetMix: Enhancing Audio-Language Datasets with LLM-Assisted Augmentations
David Xu
72
2
0
17 May 2024
StackOverflowVQA: Stack Overflow Visual Question Answering Dataset
Motahhare Mirzaei
Mohammad Javad Pirhadi
Sauleh Eetemadi
53
0
0
17 May 2024
Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning
Yuexiang Zhai
Hao Bai
Zipeng Lin
Jiayi Pan
Shengbang Tong
...
Alane Suhr
Saining Xie
Yann LeCun
Yi-An Ma
Sergey Levine
LLMAG · LRM
139
80
0
16 May 2024
Libra: Building Decoupled Vision System on Large Language Models
Yifan Xu
Xiaoshan Yang
Y. Song
Changsheng Xu
MLLM · VLM
94
8
0
16 May 2024
Enhancing Semantics in Multimodal Chain of Thought via Soft Negative Sampling
Guangmin Zheng
Jin Wang
Xiaobing Zhou
Xuejie Zhang
LRM
58
2
0
16 May 2024
SOK-Bench: A Situated Video Reasoning Benchmark with Aligned Open-World Knowledge
Andong Wang
Bo Wu
Sunli Chen
Zhenfang Chen
Haotian Guan
Wei-Ning Lee
Li Erran Li
Chuang Gan
LRM · RALM
103
19
0
15 May 2024
STAR: A Benchmark for Situated Reasoning in Real-World Videos
Bo Wu
Shoubin Yu
Zhenfang Chen
Joshua B. Tenenbaum
Chuang Gan
157
196
0
15 May 2024
Contextual Emotion Recognition using Large Vision Language Models
Yasaman Etesam
Özge Nilay Yalçin
Chuxuan Zhang
Angelica Lim
VLM
134
4
0
14 May 2024
Incorporating Clinical Guidelines through Adapting Multi-modal Large Language Model for Prostate Cancer PI-RADS Scoring
Tiantian Zhang
Manxi Lin
Hongda Guo
Xiaofan Zhang
Ka Fung Peter Chiu
Aasa Feragen
Qi Dou
83
2
0
14 May 2024
Realizing Visual Question Answering for Education: GPT-4V as a Multimodal AI
Gyeong-Geon Lee
Xiaoming Zhai
53
9
0
12 May 2024
Exploring the Capabilities of Large Multimodal Models on Dense Text
Shuo Zhang
Biao Yang
Zhang Li
Zhiyin Ma
Yuliang Liu
Xiang Bai
VLM
76
11
0
09 May 2024
Universal Adversarial Perturbations for Vision-Language Pre-trained Models
Pengfei Zhang
Zi Huang
Guangdong Bai
AAML
87
13
0
09 May 2024
LOC-ZSON: Language-driven Object-Centric Zero-Shot Object Retrieval and Navigation
Tianrui Guan
Yurou Yang
Harry Cheng
Muyuan Lin
Richard Kim
R. Madhivanan
Arnie Sen
Dinesh Manocha
LM&Ro
92
11
0
08 May 2024
THRONE: An Object-based Hallucination Benchmark for the Free-form Generations of Large Vision-Language Models
Prannay Kaul
Zhizhong Li
Hao Yang
Yonatan Dukler
Ashwin Swaminathan
C. Taylor
Stefano Soatto
HILM
166
18
0
08 May 2024
WorldQA: Multimodal World Knowledge in Videos through Long-Chain Reasoning
Yuanhan Zhang
Kaichen Zhang
Yue Liu
Fanyi Pu
Christopher Arif Setiadharma
Jingkang Yang
Ziwei Liu
VGen
111
10
0
06 May 2024
iSEARLE: Improving Textual Inversion for Zero-Shot Composed Image Retrieval
Lorenzo Agnolucci
Alberto Baldrati
Marco Bertini
A. Bimbo
95
16
0
05 May 2024
What matters when building vision-language models?
Hugo Laurençon
Léo Tronchon
Matthieu Cord
Victor Sanh
VLM
105
177
0
03 May 2024
MANTIS: Interleaved Multi-Image Instruction Tuning
Dongfu Jiang
Xuan He
Huaye Zeng
Cong Wei
Max Ku
Qian Liu
Wenhu Chen
VLM · MLLM
113
125
0
02 May 2024
ViOCRVQA: Novel Benchmark Dataset and Vision Reader for Visual Question Answering by Understanding Vietnamese Text in Images
Huy Quang Pham
Thang Kien-Bao Nguyen
Quan Van Nguyen
Dan Quang Tran
Nghia Hieu Nguyen
Kiet Van Nguyen
Ngan Luu-Thuy Nguyen
97
4
0
29 Apr 2024
What Makes Multimodal In-Context Learning Work?
Folco Bertini Baldassini
Mustafa Shukor
Matthieu Cord
Laure Soulier
Benjamin Piwowarski
138
23
0
24 Apr 2024
Re-Thinking Inverse Graphics With Large Language Models
Peter Kulits
Haiwen Feng
Weiyang Liu
Victoria Fernandez-Abrevaya
Michael J. Black
AI4CE
99
9
0
23 Apr 2024
Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering
Dongze Hao
Qunbo Wang
Longteng Guo
Jie Jiang
Jing Liu
65
1
0
22 Apr 2024
MARVEL: Multidimensional Abstraction and Reasoning through Visual Evaluation and Learning
Yifan Jiang
Jiarui Zhang
Kexuan Sun
Zhivar Sourati
Kian Ahrabian
Kaixin Ma
Filip Ilievski
Jay Pujara
LRM
113
18
0
21 Apr 2024
LTOS: Layout-controllable Text-Object Synthesis via Adaptive Cross-attention Fusions
Xiaoran Zhao
Tianhao Wu
Yu Lai
Zhiliang Tian
Zhen Huang
Yahui Liu
Zejiang He
Dongsheng Li
DiffM
114
1
0
21 Apr 2024
Exploring Diverse Methods in Visual Question Answering
Panfeng Li
Qikai Yang
Xieming Geng
Wenjing Zhou
Zhicheng Ding
Yi Nian
115
58
0
21 Apr 2024
HiVG: Hierarchical Multimodal Fine-grained Modulation for Visual Grounding
Linhui Xiao
Xiaoshan Yang
Fang Peng
Yaowei Wang
Changsheng Xu
ObjD
135
12
0
20 Apr 2024
BLINK: Multimodal Large Language Models Can See but Not Perceive
Xingyu Fu
Yushi Hu
Bangzheng Li
Yu Feng
Haoyu Wang
Xudong Lin
Dan Roth
Noah A. Smith
Wei-Chiu Ma
Ranjay Krishna
VLM · LRM · MLLM
150
150
0
18 Apr 2024
Resilience through Scene Context in Visual Referring Expression Generation
Simeon Junker
Sina Zarrieß
49
1
0
18 Apr 2024
Variational Multi-Modal Hypergraph Attention Network for Multi-Modal Relation Extraction
Qian Li
Cheng Ji
Shu Guo
Yong Zhao
Qianren Mao
Shangguang Wang
Yuntao Wei
Jianxin Li
54
1
0
18 Apr 2024