Counterfactual Visual Explanations

16 April 2019
Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, Stefan Lee
CML

Papers citing "Counterfactual Visual Explanations"

Showing 50 of 158 citing papers.
Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations
Indu Panigrahi, Sunnie S. Y. Kim, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, Ruth C. Fong, Parastoo Abtahi
FAtt, HAI · 14 Apr 2025

Attention IoU: Examining Biases in CelebA using Attention Maps
Aaron Serianni, Tyler Zhu, Olga Russakovsky, V. V. Ramaswamy
25 Mar 2025

Explainable Neural Networks with Guarantees: A Sparse Estimation Approach
Antoine Ledent, Peng Liu
FAtt · 20 Feb 2025

SegSub: Evaluating Robustness to Knowledge Conflicts and Hallucinations in Vision-Language Models
Peter Carragher, Nikitha Rao, Abhinand Jha, R Raghav, Kathleen M. Carley
VLM · 19 Feb 2025

Faithful Counterfactual Visual Explanations (FCVE)
Bismillah Khan, Syed Ali Tariq, Tehseen Zia, Muhammad Ahsan, David Windridge
12 Jan 2025

Towards Counterfactual and Contrastive Explainability and Transparency of DCNN Image Classifiers
Syed Ali Tariq, Tehseen Zia, Mubeen Ghafoor
AAML · 12 Jan 2025

Explaining the Behavior of Black-Box Prediction Algorithms with Causal Learning
Numair Sani, Daniel Malinsky, I. Shpitser
CML · 10 Jan 2025
GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers
Éloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle
23 Nov 2024

Don't be Fooled: The Misinformation Effect of Explanations in Human-AI Collaboration
Philipp Spitzer, Joshua Holstein, Katelyn Morrison, Kenneth Holstein, Gerhard Satzger, Niklas Kühl
19 Sep 2024

Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation
Hugo Porta, Emanuele Dalsasso, Diego Marcos, D. Tuia
14 Sep 2024

Interpreting Outliers in Time Series Data through Decoding Autoencoder
Patrick Knab, Sascha Marton, Christian Bartelt, Robert Fuder
03 Sep 2024

Counterfactuals and Uncertainty-Based Explainable Paradigm for the Automated Detection and Segmentation of Renal Cysts in Computed Tomography Images: A Multi-Center Study
Zohaib Salahuddin, A. Ibrahim, Sheng Kuang, Y. Widaatalla, R. Miclea, ..., Tom Marcelissen, Patricia Zondervan, Auke Jager, Philippe Lambin, Henry C. Woodruff
MedIm · 07 Aug 2024

On the Evaluation Consistency of Attribution-based Explanations
Jiarui Duan, Haoling Li, Haofei Zhang, Hao Jiang, Mengqi Xue, Li Sun, Mingli Song
XAI · 28 Jul 2024

Integrated feature analysis for deep learning interpretation and class activation maps
Yanli Li, Tahereh Hassanzadeh, D. Shamonin, Monique Reijnierse, A. H. V. D. H. Mil, B. Stoel
01 Jul 2024
Enhancing predictive imaging biomarker discovery through treatment effect analysis
Shuhan Xiao, Lukas Klein, Jens Petersen, Philipp Vollmuth, Paul F. Jaeger, Klaus H. Maier-Hein
04 Jun 2024

Measuring Feature Dependency of Neural Networks by Collapsing Feature Dimensions in the Data Manifold
Yinzhu Jin, Matthew B. Dwyer, P. T. Fletcher
MedIm · 18 Apr 2024

Global Counterfactual Directions
Bartlomiej Sobieski, P. Biecek
DiffM · 18 Apr 2024
CountARFactuals -- Generating plausible model-agnostic counterfactual explanations with adversarial random forests
Susanne Dandl, Kristin Blesch, Timo Freiesleben, Gunnar König, Jan Kapar, B. Bischl, Marvin N. Wright
AAML · 04 Apr 2024
Navigating the Structured What-If Spaces: Counterfactual Generation via Structured Diffusion
Nishtha Madaan, Srikanta J. Bedathur
DiffM · 21 Dec 2023

ALMANACS: A Simulatability Benchmark for Language Model Explainability
Edmund Mills, Shiye Su, Stuart J. Russell, Scott Emmons
20 Dec 2023

Interpretable Knowledge Tracing via Response Influence-based Counterfactual Reasoning
Jiajun Cui, Minghe Yu, Bo Jiang, Aimin Zhou, Jianyong Wang, Wei Zhang
01 Dec 2023

Mixture of Gaussian-distributed Prototypes with Generative Modelling for Interpretable and Trustworthy Image Recognition
Chong Wang, Yuanhong Chen, Fengbei Liu, Yuyuan Liu, Davis J. McCarthy, Helen Frazer, Gustavo Carneiro
30 Nov 2023

ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation [Technical Report]
Hamed Ayoobi, Nico Potyka, Francesca Toni
26 Nov 2023
Advancing Post Hoc Case Based Explanation with Feature Highlighting
Eoin M. Kenny, Eoin Delaney, Mark T. Keane
06 Nov 2023
Overview of Class Activation Maps for Visualization Explainability
Anh Pham Thi Minh
HAI, FAtt · 25 Sep 2023

Impact of architecture on robustness and interpretability of multispectral deep neural networks
Charles Godfrey, Elise Bishoff, Myles Mckay, E. Byler
21 Sep 2023

FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
AAML · 11 Aug 2023

Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations
Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, Kathleen McKeown
LRM · 17 Jul 2023

The future of human-centric eXplainable Artificial Intelligence (XAI) is not post-hoc explanations
Vinitra Swamy, Jibril Frej, Tanja Käser
01 Jul 2023
Probabilistic Concept Bottleneck Models
Eunji Kim, Dahuin Jung, Sangha Park, Siwon Kim, Sungroh Yoon
02 Jun 2023
LANCE: Stress-testing Visual Models by Generating Language-guided Counterfactual Images
Viraj Prabhu, Sriram Yenamandra, Prithvijit Chattopadhyay, Judy Hoffman
30 May 2023

Choose your Data Wisely: A Framework for Semantic Counterfactuals
Edmund Dervakos, Konstantinos Thomas, Giorgos Filandrianos, Giorgos Stamou
AAML · 28 May 2023

ML-Based Teaching Systems: A Conceptual Framework
Philipp Spitzer, Niklas Kühl, Daniel Heinz, G. Satzger
12 May 2023

Logic for Explainable AI
Adnan Darwiche
09 May 2023

Learning with Explanation Constraints
Rattana Pukdee, Dylan Sam, J. Zico Kolter, Maria-Florina Balcan, Pradeep Ravikumar
FAtt · 25 Mar 2023
Towards Learning and Explaining Indirect Causal Effects in Neural Networks
Abbavaram Gowtham Reddy, Saketh Bachu, Harsh Nilesh Pathak, Ben Godfrey, V. Balasubramanian, V. Varshaneya, Satya Narayanan Kar
CML · 24 Mar 2023
Adversarial Counterfactual Visual Explanations
Guillaume Jeanneret, Loïc Simon, F. Jurie
DiffM · 17 Mar 2023
Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals
Greta Warren, Mark T. Keane, Christophe Guéret, Eoin Delaney
16 Mar 2023

ICICLE: Interpretable Class Incremental Continual Learning
Dawid Rymarczyk, Joost van de Weijer, Bartosz Zieliński, Bartlomiej Twardowski
CLL · 14 Mar 2023
Explaining Model Confidence Using Counterfactuals
Thao Le, Tim Miller, Ronal Singh, L. Sonenberg
10 Mar 2023

GAM Coach: Towards Interactive and User-centered Algorithmic Recourse
Zijie J. Wang, J. W. Vaughan, R. Caruana, Duen Horng Chau
HAI · 27 Feb 2023

Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
M. Hashemi, Ali Darejeh, Francisco Cruz
07 Feb 2023

Neural Insights for Digital Marketing Content Design
F. Kong, Yuan Li, Houssam Nassif, Tanner Fiez, Ricardo Henao, Shreya Chakrabarti
3DV · 02 Feb 2023

Interpreting Robustness Proofs of Deep Neural Networks
Debangshu Banerjee, Avaljot Singh, Gagandeep Singh
AAML · 31 Jan 2023

Emerging Synergies in Causality and Deep Generative Models: A Survey
Guanglin Zhou, Shaoan Xie, Guang-Yuan Hao, Shiming Chen, Erdun Gao, Xiwei Xu, Chen Wang, Liming Zhu, Lina Yao, Kun Zhang
AI4CE · 29 Jan 2023
ProtoSeg: Interpretable Semantic Segmentation with Prototypical Parts
Mikolaj Sacha, Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński
VLM · 28 Jan 2023
ExplainableFold: Understanding AlphaFold Prediction with Explainable AI
Juntao Tan, Yongfeng Zhang
27 Jan 2023

Explainability and Robustness of Deep Visual Classification Models
Jindong Gu
AAML · 03 Jan 2023

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Eoin Delaney, A. Pakrashi, Derek Greene, Mark T. Keane
16 Dec 2022
This changes to that: Combining causal and non-causal explanations to generate disease progression in capsule endoscopy
Anuja Vats, A. Mohammed, Marius Pedersen, Nirmalie Wiratunga
MedIm · 05 Dec 2022