Explaining Image Classifiers by Counterfactual Generation
C. Chang, Elliot Creager, Anna Goldenberg, David Duvenaud
arXiv:1807.08024 · 20 July 2018 · [VLM]
Papers citing "Explaining Image Classifiers by Counterfactual Generation" (35 of 85 shown):
- DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications. Dongqi Han, Zhiliang Wang, Wenqi Chen, Ying Zhong, Su Wang, Han Zhang, Jiahai Yang, Xingang Shi, Xia Yin. 23 Sep 2021. [AAML]
- Deriving Explanation of Deep Visual Saliency Models. S. Malladi, J. Mukhopadhyay, M. Larabi, S. Chaudhury. 08 Sep 2021. [FAtt, XAI]
- Explainable Reinforcement Learning for Broad-XAI: A Conceptual Framework and Survey. Richard Dazeley, Peter Vamplew, Francisco Cruz. 20 Aug 2021.
- Temporal Dependencies in Feature Importance for Time Series Predictions. Kin Kwan Leung, Clayton Rooke, Jonathan Smith, S. Zuberi, M. Volkovs. 29 Jul 2021. [OOD, AI4TS]
- Contrastive Counterfactual Visual Explanations With Overdetermination. Adam White, K. Ngan, James Phelan, Saman Sadeghi Afgeh, Kevin Ryan, C. Reyes-Aldasoro, Artur Garcez. 28 Jun 2021.
- Towards Robust Classification Model by Counterfactual and Invariant Data Generation. C. Chang, George Adam, Anna Goldenberg. 02 Jun 2021. [OOD, CML]
- The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations. Peter Hase, Harry Xie, Joey Tianyi Zhou. 01 Jun 2021. [OODD, LRM, FAtt]
- Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis. Samuele Poppi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara. 20 Apr 2021. [FAtt]
- Explaining the Road Not Taken. Hua Shen, Ting-Hao 'Kenneth' Huang. 27 Mar 2021. [FAtt, XAI]
- Robust Models Are More Interpretable Because Attributions Look Normal. Zifan Wang, Matt Fredrikson, Anupam Datta. 20 Mar 2021. [OOD, FAtt]
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations. Pau Rodríguez López, Massimo Caccia, Alexandre Lacoste, L. Zamparo, I. Laradji, Laurent Charlin, David Vazquez. 18 Mar 2021. [AAML]
- BBAM: Bounding Box Attribution Map for Weakly Supervised Semantic and Instance Segmentation. Jungbeom Lee, Jihun Yi, Chaehun Shin, Sungroh Yoon. 16 Mar 2021. [ISeg]
- Explaining the Black-box Smoothly - A Counterfactual Approach. Junyu Chen, Yong Du, Yufan He, W. Paul Segars, Ye Li. 11 Jan 2021. [MedIm, FAtt]
- Interpretability and Explainability: A Machine Learning Zoo Mini-tour. Ricards Marcinkevics, Julia E. Vogt. 03 Dec 2020. [XAI]
- Understanding Failures of Deep Networks via Robust Feature Extraction. Sahil Singla, Besmira Nushi, S. Shah, Ece Kamar, Eric Horvitz. 03 Dec 2020. [FAtt]
- Explaining by Removing: A Unified Framework for Model Explanation. Ian Covert, Scott M. Lundberg, Su-In Lee. 21 Nov 2020. [FAtt]
- Feature Removal Is a Unifying Principle for Model Explanation Methods. Ian Covert, Scott M. Lundberg, Su-In Lee. 06 Nov 2020. [FAtt]
- Interpretation of NLP models through input marginalization. Siwon Kim, Jihun Yi, Eunji Kim, Sungroh Yoon. 27 Oct 2020. [MILM, FAtt]
- Trustworthy Convolutional Neural Networks: A Gradient Penalized-based Approach. Nicholas F Halliwell, Freddy Lecue. 29 Sep 2020. [FAtt]
- Counterfactual Explanation and Causal Inference in Service of Robustness in Robot Control. Simón C. Smith, S. Ramamoorthy. 18 Sep 2020.
- iCaps: An Interpretable Classifier via Disentangled Capsule Networks. Dahuin Jung, Jonghyun Lee, Jihun Yi, Sungroh Yoon. 20 Aug 2020.
- Counterfactual Explanation Based on Gradual Construction for Deep Networks. Hong G Jung, Sin-Han Kang, Hee-Dong Kim, Dong-Ok Won, Seong-Whan Lee. 05 Aug 2020. [OOD, FAtt]
- A simple defense against adversarial attacks on heatmap explanations. Laura Rieger, Lars Kai Hansen. 13 Jul 2020. [FAtt, AAML]
- Scientific Discovery by Generating Counterfactuals using Image Translation. Arunachalam Narayanaswamy, Subhashini Venugopalan, D. Webster, L. Peng, G. Corrado, ..., Abigail E. Huang, Siva Balasubramanian, Michael P. Brenner, Phil Q. Nelson, A. Varadarajan. 10 Jul 2020. [DiffM, MedIm]
- Generative causal explanations of black-box classifiers. Matthew R. O'Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell. 24 Jun 2020. [CML]
- Getting a CLUE: A Method for Explaining Uncertainty Estimates. Javier Antorán, Umang Bhatt, T. Adel, Adrian Weller, José Miguel Hernández-Lobato. 11 Jun 2020. [UQCV, BDL]
- Evaluating and Aggregating Feature-based Model Explanations. Umang Bhatt, Adrian Weller, J. M. F. Moura. 01 May 2020. [XAI]
- Interpreting Medical Image Classifiers by Optimization Based Counterfactual Impact Analysis. David Major, Dimitrios Lenis, M. Wimmer, Gert Sluiter, Astrid Berg, Katja Bühler. 03 Apr 2020. [FAtt]
- Adversarial Robustness on In- and Out-Distribution Improves Explainability. Maximilian Augustin, Alexander Meinke, Matthias Hein. 20 Mar 2020. [OOD]
- What went wrong and when? Instance-wise Feature Importance for Time-series Models. S. Tonekaboni, Shalmali Joshi, Kieran Campbell, David Duvenaud, Anna Goldenberg. 05 Mar 2020. [FAtt, OOD, AI4TS]
- Explaining Visual Models by Causal Attribution. Álvaro Parafita, Jordi Vitrià. 19 Sep 2019. [CML, FAtt]
- Grid Saliency for Context Explanations of Semantic Segmentation. Lukas Hoyer, Mauricio Muñoz, P. Katiyar, Anna Khoreva, Volker Fischer. 30 Jul 2019. [FAtt]
- Image Counterfactual Sensitivity Analysis for Detecting Unintended Bias. Emily L. Denton, B. Hutchinson, Margaret Mitchell, Timnit Gebru, Andrew Zaldivar. 14 Jun 2019. [CVBM]
- On the (In)fidelity and Sensitivity for Explanations. Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar. 27 Jan 2019. [FAtt]
- Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. Y. Gal, Zoubin Ghahramani. 06 Jun 2015. [UQCV, BDL]