Explaining Classifiers with Causal Concept Effect (CaCE)
Yash Goyal, Amir Feder, Uri Shalit, Been Kim
16 July 2019 (arXiv:1907.07165)
Papers citing "Explaining Classifiers with Causal Concept Effect (CaCE)" (50 of 55 papers shown)
- Sample-efficient Learning of Concepts with Theoretical Guarantees: from Data to Concepts without Interventions
  H. Fokkema, T. Erven, Sara Magliacane (10 Feb 2025)

- GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers
  Éloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle (23 Nov 2024)

- CausalConceptTS: Causal Attributions for Time Series Classification using High Fidelity Diffusion Models
  Juan Miguel Lopez Alcaraz, Nils Strodthoff (24 May 2024)

- Measuring Feature Dependency of Neural Networks by Collapsing Feature Dimensions in the Data Manifold
  Yinzhu Jin, Matthew B. Dwyer, P. T. Fletcher (18 Apr 2024)

- Implementing local-explainability in Gradient Boosting Trees: Feature Contribution
  Ángel Delgado-Panadero, Beatriz Hernández-Lorca, María Teresa García-Ordás, J. Benítez-Andrades (14 Feb 2024)

- A Glitch in the Matrix? Locating and Detecting Language Model Grounding with Fakepedia
  Giovanni Monea, Maxime Peyrard, Martin Josifoski, Vishrav Chaudhary, Jason Eisner, Emre Kiciman, Hamid Palangi, Barun Patra, Robert West (04 Dec 2023)

- Interpreting Pretrained Language Models via Concept Bottlenecks
  Zhen Tan, Lu Cheng, Song Wang, Yuan Bo, Wenlin Yao, Huan Liu (08 Nov 2023)

- Uncovering Unique Concept Vectors through Latent Space Decomposition
  Mara Graziani, Laura Mahony, An-phi Nguyen, Henning Muller, Vincent Andrearczyk (13 Jul 2023)

- Linking a predictive model to causal effect estimation
  Jiuyong Li, Lin Liu, Ziqi Xu, Ha Xuan Tran, T. Le, Jixue Liu (10 Apr 2023)

- Towards Learning and Explaining Indirect Causal Effects in Neural Networks
  Abbavaram Gowtham Reddy, Saketh Bachu, Harsh Nilesh Pathak, Ben Godfrey, V. Balasubramanian, V. Varshaneya, Satya Narayanan Kar (24 Mar 2023)

- CEnt: An Entropy-based Model-agnostic Explainability Framework to Contrast Classifiers' Decisions
  Julia El Zini, Mohamad Mansour, M. Awad (19 Jan 2023)

- Explainable AI for Bioinformatics: Methods, Tools, and Applications
  Md. Rezaul Karim, Tanhim Islam, Oya Beyan, Christoph Lange, Michael Cochez, Dietrich Rebholz-Schuhmann, Stefan Decker (25 Dec 2022)

- Adapting to Latent Subgroup Shifts via Concepts and Proxies
  Ibrahim M. Alabdulmohsin, Nicole Chiou, Alexander D'Amour, Arthur Gretton, Sanmi Koyejo, Matt J. Kusner, Stephen R. Pfohl, Olawale Salaudeen, Jessica Schrouff, Katherine Tsai (21 Dec 2022)

- Understanding and Enhancing Robustness of Concept-based Models
  Sanchit Sinha, Mengdi Huai, Jianhui Sun, Aidong Zhang (29 Nov 2022)

- Latent SHAP: Toward Practical Human-Interpretable Explanations
  Ron Bitton, Alon Malach, Amiel Meiseles, Satoru Momiyama, Toshinori Araki, Jun Furukawa, Yuval Elovici, A. Shabtai (27 Nov 2022)

- Causal Proxy Models for Concept-Based Model Explanations
  Zhengxuan Wu, Karel D'Oosterlinck, Atticus Geiger, Amir Zur, Christopher Potts (28 Sep 2022)

- Explainable AI for clinical and remote health applications: a survey on tabular and time series data
  Flavio Di Martino, Franca Delmastro (14 Sep 2022)

- Unit Testing for Concepts in Neural Networks
  Charles Lovering, Ellie Pavlick (28 Jul 2022)

- Spatial-temporal Concept based Explanation of 3D ConvNets
  Yi Ji, Yu Wang, K. Mori, Jien Kato (09 Jun 2022)

- Post-hoc Concept Bottleneck Models
  Mert Yuksekgonul, Maggie Wang, James Zou (31 May 2022)

- Clinical outcome prediction under hypothetical interventions -- a representation learning framework for counterfactual reasoning
  Yikuan Li, M. Mamouei, Shishir Rao, A. Hassaine, D. Canoy, Thomas Lukasiewicz, K. Rahimi, G. Salimi-Khorshidi (15 May 2022)

- Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset
  Leon Sixt, M. Schuessler, Oana-Iuliana Popescu, Philipp Weiß, Tim Landgraf (25 Apr 2022)

- ConceptExplainer: Interactive Explanation for Deep Neural Networks from a Concept Perspective
  Jinbin Huang, Aditi Mishra, Bum Chul Kwon, Chris Bryan (04 Apr 2022)

- Concept Evolution in Deep Learning Training: A Unified Interpretation Framework and Discoveries
  Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin P. Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Kevin Wenliang Li, Judy Hoffman, Duen Horng Chau (30 Mar 2022)

- Concept Embedding Analysis: A Review
  Gesina Schwalbe (25 Mar 2022)

- Human-Centered Concept Explanations for Neural Networks
  Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar (25 Feb 2022)

- Evaluation Methods and Measures for Causal Learning Algorithms
  Lu Cheng, Ruocheng Guo, Raha Moraffah, Paras Sheth, K. S. Candan, Huan Liu (07 Feb 2022)

- Which Style Makes Me Attractive? Interpretable Control Discovery and Counterfactual Explanation on StyleGAN
  Yangqiu Song, Qiulin Wang, Jiquan Pei, Yu Yang, Xiangyang Ji (24 Jan 2022)

- A Causal Lens for Controllable Text Generation
  Zhiting Hu, Erran L. Li (22 Jan 2022)

- On Causally Disentangled Representations
  Abbavaram Gowtham Reddy, Benin Godfrey L, V. Balasubramanian (10 Dec 2021)

- Editing a classifier by rewriting its prediction rules
  Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, Aleksander Madry (02 Dec 2021)

- Matching Learned Causal Effects of Neural Networks with Domain Priors
  Sai Srinivas Kancheti, Abbavaram Gowtham Reddy, V. Balasubramanian, Amit Sharma (24 Nov 2021)

- Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models?
  Thang M. Pham, Trung H. Bui, Long Mai, Anh Totti Nguyen (22 Oct 2021)

- Unsupervised Causal Binary Concepts Discovery with VAE for Black-box Model Explanation
  Thien Q. Tran, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma (09 Sep 2021)

- Instance-wise or Class-wise? A Tale of Neighbor Shapley for Concept-based Explanation
  Jiahui Li, Kun Kuang, Lin Li, Long Chen, Songyang Zhang, Jian Shao, Jun Xiao (03 Sep 2021)

- Entropy-based Logic Explanations of Neural Networks
  Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Pietro Lio, Marco Gori, S. Melacci (12 Jun 2021)

- What if This Modified That? Syntactic Interventions via Counterfactual Embeddings
  Mycal Tucker, Peng Qian, R. Levy (28 May 2021)

- Leveraging Sparse Linear Layers for Debuggable Deep Networks
  Eric Wong, Shibani Santurkar, Aleksander Madry (11 May 2021)

- Rationalization through Concepts
  Diego Antognini, Boi Faltings (11 May 2021)

- Explaining in Style: Training a GAN to explain a classifier in StyleSpace
  Oran Lang, Yossi Gandelsman, Michal Yarom, Yoav Wald, G. Elidan, ..., William T. Freeman, Phillip Isola, Amir Globerson, Michal Irani, Inbar Mosseri (27 Apr 2021)

- Model Compression for Domain Adaptation through Causal Effect Estimation
  Guy Rotman, Amir Feder, Roi Reichart (18 Jan 2021)

- Counterfactual Generative Networks
  Axel Sauer, Andreas Geiger (15 Jan 2021)

- Unbox the Blackbox: Predict and Interpret YouTube Viewership Using Deep Learning
  Jiaheng Xie, Xinyu Liu (21 Dec 2020)

- Concept-based model explanations for Electronic Health Records
  Diana Mincu, Eric Loreaux, Shaobo Hou, Sebastien Baur, Ivan V. Protsyuk, Martin G. Seneviratne, A. Mottram, Nenad Tomašev, Alan Karthikesalingam, Jessica Schrouff (03 Dec 2020)

- Understanding Failures of Deep Networks via Robust Feature Extraction
  Sahil Singla, Besmira Nushi, S. Shah, Ece Kamar, Eric Horvitz (03 Dec 2020)

- Now You See Me (CME): Concept-based Model Extraction
  Dmitry Kazhdan, B. Dimanov, M. Jamnik, Pietro Lio, Adrian Weller (25 Oct 2020)

- Debiasing Concept-based Explanations with Causal Analysis
  M. T. Bahadori, David Heckerman (22 Jul 2020)

- Concept Bottleneck Models
  Pang Wei Koh, Thao Nguyen, Y. S. Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang (09 Jul 2020)

- Unifying Model Explainability and Robustness via Machine-Checkable Concepts
  Vedant Nanda, Till Speicher, John P. Dickerson, Krishna P. Gummadi, Muhammad Bilal Zafar (01 Jul 2020)

- Generative causal explanations of black-box classifiers
  Matthew R. O'Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell (24 Jun 2020)