Where To Look: Focus Regions for Visual Question Answering

Kevin J. Shih, Saurabh Singh, Derek Hoiem
23 November 2015

Papers citing "Where To Look: Focus Regions for Visual Question Answering"

Showing 29 of 79 citing papers:

Object-based reasoning in VQA
Mikyas T. Desta, Larry Chen, Tomasz Kornuta
29 Jan 2018

Tell-and-Answer: Towards Explainable Visual Question Answering using Attributes and Captions
Qing Li, Jianlong Fu, D. Yu, Tao Mei, Jiebo Luo
27 Jan 2018 · FAtt, XAI, CoGe

Don't Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering
Aishwarya Agrawal, Dhruv Batra, Devi Parikh, Aniruddha Kembhavi
01 Dec 2017 · OOD

Convolutional Image Captioning
J. Aneja, Aditya Deshpande, A. Schwing
24 Nov 2017 · VLM

Visual Question Generation as Dual Task of Visual Question Answering
Yikang Li, Nan Duan, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang
21 Sep 2017

VQS: Linking Segmentations to Questions and Answers for Supervised Attention in VQA and Question-Focused Semantic Segmentation
Chuang Gan, Yandong Li, Haoxiang Li, Chen Sun, Boqing Gong
15 Aug 2017

Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering
Zhou Yu, Jun-chen Yu, Jianping Fan, Dacheng Tao
04 Aug 2017

MUTAN: Multimodal Tucker Fusion for Visual Question Answering
H. Ben-younes, Rémi Cadène, Matthieu Cord, Nicolas Thome
18 May 2017

Survey of Visual Question Answering: Datasets and Techniques
A. Gupta
10 May 2017

An Analysis of Visual Question Answering Algorithms
Kushal Kafle, Christopher Kanan
28 Mar 2017

VQABQ: Visual Question Answering by Basic Questions
Jia-Hong Huang, Modar Alfadly, Guohao Li
19 Mar 2017

Task-driven Visual Saliency and Attention-based Visual Question Answering
Yuetan Lin, Zhangyang Pang, Donghui Wang, Yueting Zhuang
22 Feb 2017

CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning
Justin Johnson, B. Hariharan, Laurens van der Maaten, Li Fei-Fei, C. L. Zitnick, Ross B. Girshick
20 Dec 2016 · CoGe

The VQA-Machine: Learning How to Use Existing Vision Algorithms to Answer New Questions
Peng Wang, Qi Wu, Chunhua Shen, Anton Van Den Hengel
16 Dec 2016 · OOD

Attentive Explanations: Justifying Decisions and Pointing to the Evidence
Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Bernt Schiele, Trevor Darrell, Marcus Rohrbach
14 Dec 2016 · AAML

MarioQA: Answering Questions by Watching Gameplay Videos
Jonghwan Mun, Paul Hongsuck Seo, Ilchae Jung, Bohyung Han
06 Dec 2016

Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
Yash Goyal, Tejas Khot, D. Summers-Stay, Dhruv Batra, Devi Parikh
02 Dec 2016 · CoGe

GuessWhat?! Visual object discovery through multi-modal dialogue
H. D. Vries, Florian Strub, A. Chandar, Olivier Pietquin, Hugo Larochelle, Aaron Courville
23 Nov 2016 · VLM

Dual Attention Networks for Multimodal Reasoning and Matching
Hyeonseob Nam, Jung-Woo Ha, Jeonghee Kim
02 Nov 2016

Mean Box Pooling: A Rich Image Representation and Output Embedding for the Visual Madlibs Task
Ashkan Mokarian, Mateusz Malinowski, Mario Fritz
09 Aug 2016

Attend Refine Repeat: Active Box Proposal Generation via In-Out Localization
Spyridon Gidaris, N. Komodakis
14 Jun 2016 · ObjD

Adversarial Feature Learning
Jeff Donahue, Philipp Krähenbühl, Trevor Darrell
31 May 2016 · GAN

Learning Models for Actions and Person-Object Interactions with Transfer to Question Answering
Arun Mallya, Svetlana Lazebnik
16 Apr 2016

Learning Visual Storylines with Skipping Recurrent Neural Networks
Gunnar A. Sigurdsson, Xinlei Chen, Abhinav Gupta
14 Apr 2016

A Focused Dynamic Attention Model for Visual Question Answering
Ilija Ilievski, Shuicheng Yan, Jiashi Feng
06 Apr 2016

A Diagram Is Worth A Dozen Images
Aniruddha Kembhavi, M. Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi
24 Mar 2016 · 3DV

Implicit Distortion and Fertility Models for Attention-based Encoder-Decoder NMT Model
Shi Feng, Shujie Liu, Mu Li, M. Zhou
13 Jan 2016

Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering
Huijuan Xu, Kate Saenko
17 Nov 2015

MatConvNet - Convolutional Neural Networks for MATLAB
Andrea Vedaldi, Karel Lenc
15 Dec 2014