Explaining Image Classifiers by Counterfactual Generation (arXiv:1807.08024)

20 July 2018
C. Chang, Elliot Creager, Anna Goldenberg, David Duvenaud
Tags: VLM

Papers citing "Explaining Image Classifiers by Counterfactual Generation"

50 / 85 papers shown
Faithful Counterfactual Visual Explanations (FCVE)
Bismillah Khan, Syed Ali Tariq, Tehseen Zia, Muhammad Ahsan, David Windridge
12 Jan 2025

Reconciling Privacy and Explainability in High-Stakes: A Systematic Inquiry
Supriya Manna, Niladri Sett
30 Dec 2024

Benchmarking the Attribution Quality of Vision Models
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
Tags: FAtt
16 Jul 2024

Inpainting the Gaps: A Novel Framework for Evaluating Explanation Methods in Vision Transformers
Lokesh Badisa, Sumohana S. Channappayya
17 Jun 2024

Listenable Maps for Zero-Shot Audio Classifiers
Francesco Paissan, Luca Della Libera, Mirco Ravanelli, Cem Subakan
27 May 2024

Manifold-based Shapley for SAR Recognization Network Explanation
Xuran Hu, Mingzhe Zhu, Yuanjing Liu, Zhenpeng Feng, Ljubiša Stanković
Tags: FAtt, GAN
06 Jan 2024

Explaining high-dimensional text classifiers
Odelia Melamed, Rich Caruana
22 Nov 2023

Impact of architecture on robustness and interpretability of multispectral deep neural networks
Charles Godfrey, Elise Bishoff, Myles Mckay, E. Byler
21 Sep 2023

FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
Tags: AAML
11 Aug 2023

Active Globally Explainable Learning for Medical Images via Class Association Embedding and Cyclic Adversarial Generation
Ruitao Xie, Jingbang Chen, Limai Jiang, Ru Xiao, Yi-Lun Pan, Yunpeng Cai
Tags: GAN, MedIm
12 Jun 2023

Reason to explain: Interactive contrastive explanations (REASONX)
Laura State, Salvatore Ruggieri, Franco Turini
Tags: LRM
29 May 2023

Rectifying Group Irregularities in Explanations for Distribution Shift
Adam Stein, Yinjun Wu, Eric Wong, Mayur Naik
25 May 2023

DEGREE: Decomposition Based Explanation For Graph Neural Networks
Qizhang Feng, Ninghao Liu, Fan Yang, Ruixiang Tang, Mengnan Du, Xia Hu
22 May 2023

A Lifted Bregman Formulation for the Inversion of Deep Neural Networks
Xiaoyu Wang, Martin Benning
01 Mar 2023

Neural Insights for Digital Marketing Content Design
F. Kong, Yuan Li, Houssam Nassif, Tanner Fiez, Ricardo Henao, Shreya Chakrabarti
Tags: 3DV
02 Feb 2023

Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache
17 Jan 2023

Explainability and Robustness of Deep Visual Classification Models
Jindong Gu
Tags: AAML
03 Jan 2023

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Eoin Delaney, A. Pakrashi, Derek Greene, Mark T. Keane
16 Dec 2022

Explainability as statistical inference
Hugo Senetaire, Damien Garreau, J. Frellsen, Pierre-Alexandre Mattei
Tags: FAtt
06 Dec 2022

OCTET: Object-aware Counterfactual Explanations
Mehdi Zemni, Mickaël Chen, Éloi Zablocki, H. Ben-younes, Patrick Pérez, Matthieu Cord
Tags: AAML
22 Nov 2022

Clarity: an improved gradient method for producing quality visual counterfactual explanations
Claire Theobald, Frédéric Pennerath, Brieuc Conan-Guez, Miguel Couceiro, Amedeo Napoli
Tags: BDL
22 Nov 2022

Explaining Image Classifiers with Multiscale Directional Image Representation
Stefan Kolek, Robert Windesheim, Héctor Andrade-Loarca, Gitta Kutyniok, Ron Levie
22 Nov 2022

Data-Centric Debugging: mitigating model failures via targeted data collection
Sahil Singla, Atoosa Malemir Chegini, Mazda Moayeri, Soheil Feizi
17 Nov 2022

New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound
Arushi Gupta, Nikunj Saunshi, Dingli Yu, Kaifeng Lyu, Sanjeev Arora
Tags: AAML, FAtt, XAI
05 Nov 2022

A Regularized Conditional GAN for Posterior Sampling in Image Recovery Problems
Matthew Bendel, Rizwan Ahmad, Philip Schniter
Tags: MedIm
24 Oct 2022

Diffusion Visual Counterfactual Explanations
Maximilian Augustin, Valentyn Boreiko, Francesco Croce, Matthias Hein
Tags: DiffM, BDL
21 Oct 2022

Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value
Quan Zheng, Ziwei Wang, Jie Zhou, Jiwen Lu
Tags: FAtt
07 Aug 2022

Unit Testing for Concepts in Neural Networks
Charles Lovering, Ellie Pavlick
28 Jul 2022

Towards Counterfactual Image Manipulation via CLIP
Yingchen Yu, Fangneng Zhan, Rongliang Wu, Jiahui Zhang, Shijian Lu, Miaomiao Cui, Xuansong Xie, Xiansheng Hua, Chunyan Miao
Tags: CLIP
06 Jul 2022

GLANCE: Global to Local Architecture-Neutral Concept-based Explanations
Avinash Kori, Ben Glocker, Francesca Toni
05 Jul 2022

Hierarchical Symbolic Reasoning in Hyperbolic Space for Deep Discriminative Models
Ainkaran Santhirasekaram, Avinash Kori, A. Rockall, Mathias Winkler, Francesca Toni, Ben Glocker
Tags: FAtt
05 Jul 2022

Distilling Model Failures as Directions in Latent Space
Saachi Jain, Hannah Lawrence, Ankur Moitra, A. Madry
29 Jun 2022

What You See is What You Classify: Black Box Attributions
Steven Stalder, Nathanael Perraudin, R. Achanta, Fernando Perez-Cruz, Michele Volpi
Tags: FAtt
23 May 2022

Sparse Visual Counterfactual Explanations in Image Space
Valentyn Boreiko, Maximilian Augustin, Francesco Croce, Philipp Berens, Matthias Hein
Tags: BDL, CML
16 May 2022

Gradient-based Counterfactual Explanations using Tractable Probabilistic Models
Xiaoting Shao, Kristian Kersting
Tags: BDL
16 May 2022

Discovering and Explaining the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions
Fang Wu, Siyuan Li, Lirong Wu, Dragomir R. Radev, Stan Z. Li
15 May 2022

Necessity and Sufficiency for Explaining Text Classifiers: A Case Study in Hate Speech Detection
Esma Balkir, I. Nejadgholi, Kathleen C. Fraser, S. Kiritchenko
Tags: FAtt
06 May 2022

Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset
Leon Sixt, M. Schuessler, Oana-Iuliana Popescu, Philipp Weiß, Tim Landgraf
Tags: FAtt
25 Apr 2022

Reliable Visualization for Deep Speaker Recognition
Pengqi Li, Lantian Li, A. Hamdulla, Dong Wang
Tags: HAI
08 Apr 2022

Human-Centered Concept Explanations for Neural Networks
Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar
Tags: FAtt
25 Feb 2022

Vision Checklist: Towards Testable Error Analysis of Image Models to Help System Designers Interrogate Model Capabilities
Xin Du, Bénédicte Legastelois, B. Ganesh, A. Rajan, Hana Chockler, Vaishak Belle, Stuart Anderson, S. Ramamoorthy
Tags: AAML
27 Jan 2022

Deconfounding to Explanation Evaluation in Graph Neural Networks
Yingmin Wu, Xiang Wang, An Zhang, Xia Hu, Fuli Feng, Xiangnan He, Tat-Seng Chua
Tags: FAtt, CML
21 Jan 2022

On Causally Disentangled Representations
Abbavaram Gowtham Reddy, Benin Godfrey L, V. Balasubramanian
Tags: OOD, CML
10 Dec 2021

Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View
Di Jin, Elena Sergeeva, W. Weng, Geeticka Chauhan, Peter Szolovits
Tags: OOD
05 Dec 2021

STEEX: Steering Counterfactual Explanations with Semantics
P. Jacob, Éloi Zablocki, H. Ben-younes, Mickaël Chen, P. Pérez, Matthieu Cord
17 Nov 2021

On Quantitative Evaluations of Counterfactuals
Frederik Hvilshøj, Alexandros Iosifidis, Ira Assent
30 Oct 2021

Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models?
Thang M. Pham, Trung H. Bui, Long Mai, Anh Totti Nguyen
22 Oct 2021

A Rate-Distortion Framework for Explaining Black-box Model Decisions
Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, Gitta Kutyniok
12 Oct 2021

Cartoon Explanations of Image Classifiers
Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, Gitta Kutyniok
Tags: FAtt
07 Oct 2021

Designing Counterfactual Generators using Deep Model Inversion
Jayaraman J. Thiagarajan, V. Narayanaswamy, Deepta Rajan, J. Liang, Akshay S. Chaudhari, A. Spanias
Tags: DiffM
29 Sep 2021