ResearchTrend.AI

Grounding Visual Explanations
arXiv:1807.09685
25 July 2018
Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, Zeynep Akata
FAtt
Papers citing "Grounding Visual Explanations"

50 / 56 papers shown
Extending Information Bottleneck Attribution to Video Sequences
Veronika Solopova, Lucas Schmidt, Dorothea Kolossa
28 Jan 2025

Faithful Counterfactual Visual Explanations (FCVE)
Bismillah Khan, Syed Ali Tariq, Tehseen Zia, Muhammad Ahsan, David Windridge
12 Jan 2025

Towards Counterfactual and Contrastive Explainability and Transparency of DCNN Image Classifiers
Syed Ali Tariq, Tehseen Zia, Mubeen Ghafoor
AAML
12 Jan 2025

GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers
Éloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle
23 Nov 2024

Understanding Multimodal Deep Neural Networks: A Concept Selection View
Chenming Shang, Hengyuan Zhang, Hao Wen, Yujiu Yang
13 Apr 2024

Navigating the Structured What-If Spaces: Counterfactual Generation via Structured Diffusion
Nishtha Madaan, Srikanta J. Bedathur
DiffM
21 Dec 2023

Object Recognition as Next Token Prediction
Kaiyu Yue, Borchun Chen, Jonas Geiping, Hengduo Li, Tom Goldstein, Ser-Nam Lim
04 Dec 2023

Interpretable Reinforcement Learning for Robotics and Continuous Control
Rohan R. Paleja, Letian Chen, Yaru Niu, Andrew Silva, Zhaoxin Li, ..., K. Chang, H. E. Tseng, Yan Wang, S. Nageshrao, Matthew C. Gombolay
16 Nov 2023

Overview of Class Activation Maps for Visualization Explainability
Anh Pham Thi Minh
HAI, FAtt
25 Sep 2023

DeepMediX: A Deep Learning-Driven Resource-Efficient Medical Diagnosis Across the Spectrum
Kishore Babu Nampalle, Pradeep Singh, Vivek Narayan Uppala, Balasubramanian Raman
MedIm
01 Jul 2023

SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations
Jesus Solano, Oana-Maria Camburu, Pasquale Minervini
22 May 2023

Towards Learning and Explaining Indirect Causal Effects in Neural Networks
Abbaavaram Gowtham Reddy, Saketh Bachu, Harsh Nilesh Pathak, Ben Godfrey, V. Balasubramanian, V. Varshaneya, Satya Narayanan Kar
CML
24 Mar 2023

Explainability and Robustness of Deep Visual Classification Models
Jindong Gu
AAML
03 Jan 2023

Hierarchical Explanations for Video Action Recognition
Sadaf Gulshad, Teng Long, Nanne van Noord
FAtt
01 Jan 2023

Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification
Yue Yang, Artemis Panagopoulou, Shenghao Zhou, Daniel Jin, Chris Callison-Burch, Mark Yatskar
21 Nov 2022

Diffusion Visual Counterfactual Explanations
Maximilian Augustin, Valentyn Boreiko, Francesco Croce, Matthias Hein
DiffM, BDL
21 Oct 2022

Prophet Attention: Predicting Attention with Future Attention for Image Captioning
Fenglin Liu, Xuancheng Ren, Xian Wu, Wei Fan, Yuexian Zou, Xu Sun
19 Oct 2022

Improving Few-Shot Image Classification Using Machine- and User-Generated Natural Language Descriptions
Kosuke Nishida, Kyosuke Nishida, Shuichi Nishioka
VLM
07 Jul 2022

Distilling Model Failures as Directions in Latent Space
Saachi Jain, Hannah Lawrence, Ankur Moitra, A. Madry
29 Jun 2022

Sparse Visual Counterfactual Explanations in Image Space
Valentyn Boreiko, Maximilian Augustin, Francesco Croce, Philipp Berens, Matthias Hein
BDL, CML
16 May 2022

CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations
Leonard Salewski, A. Sophia Koepke, Hendrik P. A. Lensch, Zeynep Akata
LRM, NAI
05 Apr 2022

Human-Centered Concept Explanations for Neural Networks
Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar
FAtt
25 Feb 2022

First is Better Than Last for Language Data Influence
Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, Pradeep Ravikumar
TDI
24 Feb 2022

Learning Interpretable, High-Performing Policies for Autonomous Driving
Rohan R. Paleja, Yaru Niu, Andrew Silva, Chace Ritchie, Sugju Choi, Matthew C. Gombolay
04 Feb 2022

STEEX: Steering Counterfactual Explanations with Semantics
P. Jacob, Éloi Zablocki, H. Ben-younes, Mickaël Chen, P. Pérez, Matthieu Cord
17 Nov 2021

Human Attention in Fine-grained Classification
Yao Rong, Wenjia Xu, Zeynep Akata, Enkelejda Kasneci
02 Nov 2021

Towards Out-Of-Distribution Generalization: A Survey
Jiashuo Liu, Zheyan Shen, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, Peng Cui
CML, OOD
31 Aug 2021

From Show to Tell: A Survey on Deep Learning-based Image Captioning
Matteo Stefanini, Marcella Cornia, Lorenzo Baraldi, S. Cascianelli, G. Fiameni, Rita Cucchiara
3DV, VLM, MLLM
14 Jul 2021

Probing Image-Language Transformers for Verb Understanding
Lisa Anne Hendricks, Aida Nematzadeh
16 Jun 2021

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe, Bettina Finzel
XAI
15 May 2021

A First Look: Towards Explainable TextVQA Models via Visual and Textual Explanations
Varun Nagaraj Rao, Xingjian Zhen, K. Hovsepian, Mingwei Shen
29 Apr 2021

Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis
Samuele Poppi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara
FAtt
20 Apr 2021

Compressing Visual-linguistic Model via Knowledge Distillation
Zhiyuan Fang, Jianfeng Wang, Xiaowei Hu, Lijuan Wang, Yezhou Yang, Zicheng Liu
VLM
05 Apr 2021

Local Interpretations for Explainable Natural Language Processing: A Survey
Siwen Luo, Hamish Ivison, S. Han, Josiah Poon
MILM
20 Mar 2021

Explainability of deep vision-based autonomous driving systems: Review and challenges
Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord
XAI
13 Jan 2021

Explaining NLP Models via Minimal Contrastive Editing (MiCE)
Alexis Ross, Ana Marasović, Matthew E. Peters
27 Dec 2020

Explaining Deep Neural Networks
Oana-Maria Camburu
XAI, FAtt
04 Oct 2020

Where is the Model Looking At?--Concentrate and Explain the Network Attention
Wenjia Xu, Jiuniu Wang, Yang Wang, Guangluan Xu, Wei Dai, Yirong Wu
XAI
29 Sep 2020

The Impact of Explanations on AI Competency Prediction in VQA
Kamran Alipour, Arijit Ray, Xiaoyu Lin, J. Schulze, Yi Yao, Giedrius Burachas
02 Jul 2020

Counterfactual explanation of machine learning survival models
M. Kovalev, Lev V. Utkin
CML, OffRL
26 Jun 2020

Counterfactual VQA: A Cause-Effect Look at Language Bias
Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xiansheng Hua, Ji-Rong Wen
CML
08 Jun 2020

Attentional Bottleneck: Towards an Interpretable Deep Driving Network
Jinkyu Kim, Mayank Bansal
08 May 2020

SCOUT: Self-aware Discriminant Counterfactual Explanations
Pei Wang, Nuno Vasconcelos
FAtt
16 Apr 2020

Adversarial Robustness on In- and Out-Distribution Improves Explainability
Maximilian Augustin, Alexander Meinke, Matthias Hein
OOD
20 Mar 2020

Explainable Object-induced Action Decision for Autonomous Vehicles
Yiran Xu, Xiaoyin Yang, Lihang Gong, Hsuan-Chu Lin, Tz-Ying Wu, Yunsheng Li, Nuno Vasconcelos
20 Mar 2020

A Study on Multimodal and Interactive Explanations for Visual Question Answering
Kamran Alipour, J. Schulze, Yi Yao, Avi Ziskind, Giedrius Burachas
01 Mar 2020

Evaluating Weakly Supervised Object Localization Methods Right
Junsuk Choe, Seong Joon Oh, Seungho Lee, Sanghyuk Chun, Zeynep Akata, Hyunjung Shim
WSOL
21 Jan 2020

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
FAtt
17 Oct 2019

Semantically Interpretable Activation Maps: what-where-how explanations within CNNs
Diego Marcos, Sylvain Lobry, D. Tuia
FAtt, MILM
18 Sep 2019

Generative Counterfactual Introspection for Explainable Deep Learning
Shusen Liu, B. Kailkhura, Donald Loveland, Yong Han
06 Jul 2019