Generating Visual Explanations

28 March 2016
Lisa Anne Hendricks
Zeynep Akata
Marcus Rohrbach
Jeff Donahue
Bernt Schiele
Trevor Darrell
    VLM
    FAtt

Papers citing "Generating Visual Explanations"

50 / 125 papers shown
A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi
Rahee Walambe
K. Kotecha
29
140
0
17 May 2021
A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe
Bettina Finzel
XAI
29
184
0
15 May 2021
e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks
Maxime Kayser
Oana-Maria Camburu
Leonard Salewski
Cornelius Emde
Virginie Do
Zeynep Akata
Thomas Lukasiewicz
VLM
26
100
0
08 May 2021
A First Look: Towards Explainable TextVQA Models via Visual and Textual Explanations
Varun Nagaraj Rao
Xingjian Zhen
K. Hovsepian
Mingwei Shen
37
18
0
29 Apr 2021
Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis
Samuele Poppi
Marcella Cornia
Lorenzo Baraldi
Rita Cucchiara
FAtt
131
33
0
20 Apr 2021
Diagnosing Vision-and-Language Navigation: What Really Matters
Wanrong Zhu
Yuankai Qi
P. Narayana
Kazoo Sone
Sugato Basu
Qing Guo
Qi Wu
Miguel P. Eckstein
Wei Wang
LM&Ro
27
50
0
30 Mar 2021
Local Interpretations for Explainable Natural Language Processing: A Survey
Siwen Luo
Hamish Ivison
S. Han
Josiah Poon
MILM
40
48
0
20 Mar 2021
KANDINSKYPatterns -- An experimental exploration environment for Pattern Analysis and Machine Intelligence
Andreas Holzinger
Anna Saranti
Heimo Mueller
46
10
0
28 Feb 2021
Explainability of deep vision-based autonomous driving systems: Review and challenges
Éloi Zablocki
H. Ben-younes
P. Pérez
Matthieu Cord
XAI
48
170
0
13 Jan 2021
Explaining Deep Neural Networks
Oana-Maria Camburu
XAI
FAtt
33
26
0
04 Oct 2020
Where is the Model Looking At?--Concentrate and Explain the Network Attention
Wenjia Xu
Jiuniu Wang
Yang Wang
Guangluan Xu
Wei Dai
Yirong Wu
XAI
29
17
0
29 Sep 2020
Contextual Semantic Interpretability
Diego Marcos
Ruth C. Fong
Sylvain Lobry
Rémi Flamary
Nicolas Courty
D. Tuia
SSL
20
27
0
18 Sep 2020
Contrastive Explanations in Neural Networks
Mohit Prabhushankar
Gukyeong Kwon
Dogancan Temel
Ghassan AlRegib
FAtt
8
33
0
01 Aug 2020
The Impact of Explanations on AI Competency Prediction in VQA
Kamran Alipour
Arijit Ray
Xiaoyu Lin
J. Schulze
Yi Yao
Giedrius Burachas
27
9
0
02 Jul 2020
Drug discovery with explainable artificial intelligence
José Jiménez-Luna
F. Grisoni
G. Schneider
30
626
0
01 Jul 2020
A generalizable saliency map-based interpretation of model outcome
Shailja Thakur
S. Fischmeister
AAML
FAtt
MILM
24
2
0
16 Jun 2020
AI Research Considerations for Human Existential Safety (ARCHES)
Andrew Critch
David M. Krueger
30
50
0
30 May 2020
Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition
Mahsan Nourani
Chiradeep Roy
Tahrima Rahman
Eric D. Ragan
Nicholas Ruozzi
Vibhav Gogate
AAML
15
17
0
05 May 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras
Ning Xie
Marcel van Gerven
Derek Doran
AAML
XAI
41
371
0
30 Apr 2020
Adversarial Robustness on In- and Out-Distribution Improves Explainability
Maximilian Augustin
Alexander Meinke
Matthias Hein
OOD
75
99
0
20 Mar 2020
A Study on Multimodal and Interactive Explanations for Visual Question Answering
Kamran Alipour
J. Schulze
Yi Yao
Avi Ziskind
Giedrius Burachas
32
27
0
01 Mar 2020
Learning Global Transparent Models Consistent with Local Contrastive Explanations
Tejaswini Pedapati
Avinash Balakrishnan
Karthikeyan Shanmugam
Amit Dhurandhar
FAtt
22
0
0
19 Feb 2020
Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study
Ahmed Alqaraawi
M. Schuessler
Philipp Weiß
Enrico Costanza
N. Bianchi-Berthouze
AAML
FAtt
XAI
33
197
0
03 Feb 2020
CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis
Yao Xie
Melody Chen
David Kao
Ge Gao
Xiang 'Anthony' Chen
31
126
0
15 Jan 2020
Keeping Community in the Loop: Understanding Wikipedia Stakeholder Values for Machine Learning-Based Systems
C. E. Smith
Bowen Yu
Anjali Srivastava
Aaron L Halfaker
Loren G. Terveen
Haiyi Zhu
KELM
21
69
0
14 Jan 2020
On the Explanation of Machine Learning Predictions in Clinical Gait Analysis
D. Slijepcevic
Fabian Horst
Sebastian Lapuschkin
Anna-Maria Raberger
Matthias Zeppelzauer
Wojciech Samek
C. Breiteneder
W. Schöllhorn
B. Horsak
36
50
0
16 Dec 2019
TAB-VCR: Tags and Attributes based Visual Commonsense Reasoning Baselines
Jingxiang Lin
Unnat Jain
A. Schwing
LRM
ReLM
34
9
0
31 Oct 2019
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Javier Del Ser
Adrien Bennetot
Siham Tabik
...
S. Gil-Lopez
Daniel Molina
Richard Benjamins
Raja Chatila
Francisco Herrera
XAI
39
6,119
0
22 Oct 2019
Semantically Interpretable Activation Maps: what-where-how explanations within CNNs
Diego Marcos
Sylvain Lobry
D. Tuia
FAtt
MILM
22
26
0
18 Sep 2019
Class Feature Pyramids for Video Explanation
Alexandros Stergiou
G. Kapidis
Grigorios Kalliatakis
C. Chrysoulas
R. Poppe
R. Veltkamp
FAtt
33
18
0
18 Sep 2019
X-ToM: Explaining with Theory-of-Mind for Gaining Justified Human Trust
Arjun Reddy Akula
Changsong Liu
Sari Saba-Sadiya
Hongjing Lu
S. Todorovic
J. Chai
Song-Chun Zhu
24
18
0
15 Sep 2019
Neural Naturalist: Generating Fine-Grained Image Comparisons
Maxwell Forbes
Christine Kaeser-Chen
Piyush Sharma
Serge J. Belongie
VLM
64
56
0
09 Sep 2019
Grid Saliency for Context Explanations of Semantic Segmentation
Lukas Hoyer
Mauricio Muñoz
P. Katiyar
Anna Khoreva
Volker Fischer
FAtt
25
48
0
30 Jul 2019
On the Weaknesses of Reinforcement Learning for Neural Machine Translation
Leshem Choshen
Lior Fox
Zohar Aizenbud
Omri Abend
17
104
0
03 Jul 2019
Explainability in Human-Agent Systems
A. Rosenfeld
A. Richardson
XAI
27
203
0
17 Apr 2019
f-VAEGAN-D2: A Feature Generating Framework for Any-Shot Learning
Yongqin Xian
Saurabh Sharma
Bernt Schiele
Zeynep Akata
GAN
VLM
36
483
0
25 Mar 2019
Generating Natural Language Explanations for Visual Question Answering using Scene Graphs and Visual Attention
Shalini Ghosh
Giedrius Burachas
Arijit Ray
Avi Ziskind
19
65
0
15 Feb 2019
Interactive Naming for Explaining Deep Neural Networks: A Formative Study
M. Hamidi-Haines
Zhongang Qi
Alan Fern
Fuxin Li
Prasad Tadepalli
FAtt
HAI
14
11
0
18 Dec 2018
Learning to Explain with Complemental Examples
Atsushi Kanehira
Tatsuya Harada
12
40
0
04 Dec 2018
Multimodal Explanations by Predicting Counterfactuality in Videos
Atsushi Kanehira
Kentaro Takemoto
S. Inayoshi
Tatsuya Harada
26
35
0
04 Dec 2018
From Recognition to Cognition: Visual Commonsense Reasoning
Rowan Zellers
Yonatan Bisk
Ali Farhadi
Yejin Choi
LRM
BDL
OCL
ReLM
58
867
0
27 Nov 2018
Semantic bottleneck for computer vision tasks
Apostolos Modas
Seyed-Mohsen Moosavi-Dezfooli
P. Frossard
16
15
0
06 Nov 2018
Understanding the Origins of Bias in Word Embeddings
Marc-Etienne Brunet
Colleen Alkalay-Houlihan
Ashton Anderson
R. Zemel
FaML
26
198
0
08 Oct 2018
Faithful Multimodal Explanation for Visual Question Answering
Jialin Wu
Raymond J. Mooney
20
90
0
08 Sep 2018
Using Machine Learning Safely in Automotive Software: An Assessment and Adaption of Software Process Requirements in ISO 26262
Rick Salay
Krzysztof Czarnecki
25
69
0
05 Aug 2018
Grounding Visual Explanations
Lisa Anne Hendricks
Ronghang Hu
Trevor Darrell
Zeynep Akata
FAtt
17
225
0
25 Jul 2018
Generating Counterfactual Explanations with Natural Language
Lisa Anne Hendricks
Ronghang Hu
Trevor Darrell
Zeynep Akata
FAtt
15
99
0
26 Jun 2018
Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
Richard J. Tomsett
Dave Braines
Daniel Harborne
Alun D. Preece
Supriyo Chakraborty
FaML
29
164
0
20 Jun 2018
Contrastive Explanations with Local Foil Trees
J. V. D. Waa
M. Robeer
J. Diggelen
Matthieu J. S. Brinkhuis
Mark Antonius Neerincx
FAtt
19
82
0
19 Jun 2018
RISE: Randomized Input Sampling for Explanation of Black-box Models
Vitali Petsiuk
Abir Das
Kate Saenko
FAtt
67
1,151
0
19 Jun 2018