Generative causal explanations of black-box classifiers (arXiv:2006.13913)

24 June 2020
Matthew R. O’Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell
CML

Papers citing "Generative causal explanations of black-box classifiers"

41 papers

Explainable Artificial Intelligence for Medical Applications: A Review
Qiyang Sun, Alican Akman, Björn Schuller
15 Nov 2024

Linking Model Intervention to Causal Interpretation in Model Explanation
Debo Cheng, Ziqi Xu, Jiuyong Li, Lin Liu, Kui Yu, T. Le, Jixue Liu
CML
21 Oct 2024

Structural Causality-based Generalizable Concept Discovery Models
Sanchit Sinha, Guangzhi Xiong, Aidong Zhang
CML, DRL
20 Oct 2024

CoLiDR: Concept Learning using Aggregated Disentangled Representations
Sanchit Sinha, Guangzhi Xiong, Aidong Zhang
27 Jul 2024

Benchmarking the Attribution Quality of Vision Models
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
FAtt
16 Jul 2024

A survey on Concept-based Approaches For Model Improvement
Avani Gupta, P. J. Narayanan
LRM
21 Mar 2024

DiConStruct: Causal Concept-based Explanations through Black-Box Distillation
Ricardo Moreira, Jacopo Bono, Mário Cardoso, Pedro Saleiro, Mário A. T. Figueiredo, P. Bizarro
CML
16 Jan 2024

Causal State Distillation for Explainable Reinforcement Learning
Wenhao Lu, Xufeng Zhao, Thilo Fryen, Jae Hee Lee, Mengdi Li, S. Magg, Stefan Wermter
CML
30 Dec 2023

SurroCBM: Concept Bottleneck Surrogate Models for Generative Post-hoc Explanation
Bo Pan, Zhenke Liu, Yifei Zhang, Liang Zhao
11 Oct 2023

Learning to Receive Help: Intervention-Aware Concept Embedding Models
M. Zarlenga, Katherine M. Collins, Krishnamurthy Dvijotham, Adrian Weller, Z. Shams, M. Jamnik
29 Sep 2023

Towards LLM-guided Causal Explainability for Black-box Text Classifiers
Amrita Bhattacharjee, Raha Moraffah, Joshua Garland, Huan Liu
23 Sep 2023

FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
AAML
11 Aug 2023

Statistically Significant Concept-based Explanation of Image Classifiers via Model Knockoffs
Kaiwen Xu, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma
27 May 2023

On Pitfalls of RemOve-And-Retrain: Data Processing Inequality Perspective
J. Song, Keumgang Cha, Junghoon Seo
26 Apr 2023

Towards Learning and Explaining Indirect Causal Effects in Neural Networks
Abbavaram Gowtham Reddy, Saketh Bachu, Harsh Nilesh Pathak, Ben Godfrey, V. Balasubramanian, V. Varshaneya, Satya Narayanan Kar
CML
24 Mar 2023

CEnt: An Entropy-based Model-agnostic Explainability Framework to Contrast Classifiers' Decisions
Julia El Zini, Mohamad Mansour, M. Awad
19 Jan 2023

CI-GNN: A Granger Causality-Inspired Graph Neural Network for Interpretable Brain Network-Based Psychiatric Diagnosis
Kaizhong Zheng, Shujian Yu, Badong Chen
CML
04 Jan 2023

Data-Centric Debugging: mitigating model failures via targeted data collection
Sahil Singla, Atoosa Malemir Chegini, Mazda Moayeri, Soheil Feizi
17 Nov 2022

GLANCE: Global to Local Architecture-Neutral Concept-based Explanations
Avinash Kori, Ben Glocker, Francesca Toni
05 Jul 2022

Explanatory causal effects for model agnostic explanations
Jiuyong Li, Ha Xuan Tran, T. Le, Lin Liu, Kui Yu, Jixue Liu
CML
23 Jun 2022

Explaining Image Classifiers Using Contrastive Counterfactuals in Generative Latent Spaces
Kamran Alipour, Aditya Lahiri, Ehsan Adeli, Babak Salimi, M. Pazzani
CML
10 Jun 2022

Clinical outcome prediction under hypothetical interventions -- a representation learning framework for counterfactual reasoning
Yikuan Li, M. Mamouei, Shishir Rao, A. Hassaine, D. Canoy, Thomas Lukasiewicz, K. Rahimi, G. Salimi-Khorshidi
OOD, CML, AI4CE
15 May 2022

medXGAN: Visual Explanations for Medical Classifiers through a Generative Latent Space
Amil Dravid, Florian Schiffers, Boqing Gong, Aggelos K. Katsaggelos
GAN, MedIm
11 Apr 2022

OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks
Wanyu Lin, Hao Lan, Hao Wang, Baochun Li
BDL, CML
29 Mar 2022

Core Risk Minimization using Salient ImageNet
Sahil Singla, Mazda Moayeri, S. Feizi
28 Mar 2022

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre
AAML
15 Feb 2022

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
ELM, XAI
20 Jan 2022

On Causally Disentangled Representations
Abbavaram Gowtham Reddy, Benin Godfrey L, V. Balasubramanian
OOD, CML
10 Dec 2021

Salient ImageNet: How to discover spurious features in Deep Learning?
Sahil Singla, S. Feizi
AAML, VLM
08 Oct 2021

Cartoon Explanations of Image Classifiers
Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, Gitta Kutyniok
FAtt
07 Oct 2021

Unsupervised Causal Binary Concepts Discovery with VAE for Black-box Model Explanation
Thien Q. Tran, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma
CML
09 Sep 2021

VAE-CE: Visual Contrastive Explanation using Disentangled VAEs
Y. Poels, Vlado Menkovski
CoGe, DRL
20 Aug 2021

Explainable Reinforcement Learning for Broad-XAI: A Conceptual Framework and Survey
Richard Dazeley, Peter Vamplew, Francisco Cruz
20 Aug 2021

CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Martin Pawelczyk, Sascha Bielawski, J. V. D. Heuvel, Tobias Richter, Gjergji Kasneci
CML
02 Aug 2021

Explaining in Style: Training a GAN to explain a classifier in StyleSpace
Oran Lang, Yossi Gandelsman, Michal Yarom, Yoav Wald, G. Elidan, ..., William T. Freeman, Phillip Isola, Amir Globerson, Michal Irani, Inbar Mosseri
GAN
27 Apr 2021

Instance-wise Causal Feature Selection for Model Interpretation
Pranoy Panda, Sai Srinivas Kancheti, V. Balasubramanian
CML
26 Apr 2021

Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models
Dong Wang, Yuewei Yang, Chenyang Tao, Zhe Gan, Liqun Chen, Fanjie Kong, Ricardo Henao, Lawrence Carin
06 Dec 2020

Understanding Failures of Deep Networks via Robust Feature Extraction
Sahil Singla, Besmira Nushi, S. Shah, Ece Kamar, Eric Horvitz
FAtt
03 Dec 2020

Concept Bottleneck Models
Pang Wei Koh, Thao Nguyen, Y. S. Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang
09 Jul 2020

Explaining Visual Models by Causal Attribution
Álvaro Parafita, Jordi Vitrià
CML, FAtt
19 Sep 2019

A causal framework for explaining the predictions of black-box sequence-to-sequence models
David Alvarez-Melis, Tommi Jaakkola
CML
06 Jul 2017