ResearchTrend.AI
arXiv:2003.03934
Causal Interpretability for Machine Learning -- Problems, Methods and Evaluation
9 March 2020
Raha Moraffah, Mansooreh Karami, Ruocheng Guo, A. Raglin, Huan Liu
Topics: CML, ELM, XAI

Papers citing "Causal Interpretability for Machine Learning -- Problems, Methods and Evaluation"

35 of 85 papers shown.
  • Reinforced Causal Explainer for Graph Neural Networks. Xiang Wang, Y. Wu, An Zhang, Fuli Feng, Xiangnan He, Tat-Seng Chua. 23 Apr 2022. [CML]
  • Interpretation of Black Box NLP Models: A Survey. Shivani Choudhary, N. Chatterjee, S. K. Saha. 31 Mar 2022. [FAtt]
  • Cycle-Consistent Counterfactuals by Latent Transformations. Saeed Khorram, Li Fuxin. 28 Mar 2022. [BDL]
  • Text Transformations in Contrastive Self-Supervised Learning: A Review. Amrita Bhattacharjee, Mansooreh Karami, Huan Liu. 22 Mar 2022. [SSL]
  • Testing Granger Non-Causality in Panels with Cross-Sectional Dependencies. Lenon Minorics, Ali Caner Türkmen, D. Kernert, Patrick Bloebaum, Laurent Callot, Dominik Janzing. 23 Feb 2022.
  • Evaluation Methods and Measures for Causal Learning Algorithms. Lu Cheng, Ruocheng Guo, Raha Moraffah, Paras Sheth, K. S. Candan, Huan Liu. 07 Feb 2022. [CML, ELM]
  • From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI. Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert. 20 Jan 2022. [ELM, XAI]
  • COIN: Counterfactual Image Generation for VQA Interpretation. Zeyd Boukhers, Timo Hartmann, Jan Jurjens. 10 Jan 2022.
  • Explainable AI (XAI): A Systematic Meta-Survey of Current Challenges and Future Opportunities. Waddah Saeed, C. Omlin. 11 Nov 2021. [XAI]
  • A survey of Bayesian Network structure learning. N. K. Kitson, Anthony C. Constantinou, Zhi-gao Guo, Yang Liu, Kiattikun Chobtham. 23 Sep 2021. [CML]
  • Counterfactual Evaluation for Explainable AI. Yingqiang Ge, Shuchang Liu, Zelong Li, Shuyuan Xu, Shijie Geng, Yunqi Li, Juntao Tan, Fei Sun, Yongfeng Zhang. 05 Sep 2021. [CML]
  • Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions. Eike Petersen, Yannik Potdevin, Esfandiar Mohammadi, Stephan Zidowitz, Sabrina Breyer, ..., Sandra Henn, Ludwig Pechmann, M. Leucker, P. Rostalski, Christian Herzog. 20 Jul 2021. [FaML, AILaw, OOD]
  • Robust Counterfactual Explanations on Graph Neural Networks. Mohit Bajaj, Lingyang Chu, Zihui Xue, J. Pei, Lanjun Wang, P. C. Lam, Yong Zhang. 08 Jul 2021. [OOD]
  • Explaining Time Series Predictions with Dynamic Masks. Jonathan Crabbé, M. Schaar. 09 Jun 2021. [FAtt, AI4TS]
  • A Review on Explainability in Multimodal Deep Neural Nets. Gargi Joshi, Rahee Walambe, K. Kotecha. 17 May 2021.
  • Causal Inference in medicine and in health policy, a summary. Wenhao Zhang, Ramin Ramezani, A. Naeim. 10 May 2021. [CML, OOD]
  • Causal Learning for Socially Responsible AI. Lu Cheng, Ahmadreza Mosallanezhad, Paras Sheth, Huan Liu. 25 Apr 2021.
  • Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond. Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou. 19 Mar 2021. [AAML, FaML, XAI, HAI]
  • Explainable Artificial Intelligence Approaches: A Survey. Sheikh Rabiul Islam, W. Eberle, S. Ghafoor, Mohiuddin Ahmed. 23 Jan 2021. [XAI]
  • Explainability of deep vision-based autonomous driving systems: Review and challenges. Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord. 13 Jan 2021. [XAI]
  • Outcome-Explorer: A Causality Guided Interactive Visual Interface for Interpretable Algorithmic Decision Making. Md. Naimul Hoque, Klaus Mueller. 03 Jan 2021. [CML]
  • Socially Responsible AI Algorithms: Issues, Purposes, and Challenges. Lu Cheng, Kush R. Varshney, Huan Liu. 01 Jan 2021. [FaML]
  • Comprehensible Counterfactual Explanation on Kolmogorov-Smirnov Test. Zicun Cong, Lingyang Chu, Yu Yang, J. Pei. 01 Nov 2020.
  • Shapley Flow: A Graph-based Approach to Interpreting Model Predictions. Jiaxuan Wang, Jenna Wiens, Scott M. Lundberg. 27 Oct 2020. [FAtt]
  • A survey of algorithmic recourse: definitions, formulations, solutions, and prospects. Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, Isabel Valera. 08 Oct 2020. [FaML]
  • Causal Explanations of Image Misclassifications. Yan Min, Miles K. Bennett. 28 Jun 2020. [CML]
  • Generative causal explanations of black-box classifiers. Matthew R. O'Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell. 24 Jun 2020. [CML]
  • Time Series Forecasting With Deep Learning: A Survey. Bryan Lim, S. Zohren. 28 Apr 2020. [AI4TS, AI4CE]
  • Adversarial Attacks and Defenses: An Interpretation Perspective. Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu. 23 Apr 2020. [AAML]
  • Explaining Visual Models by Causal Attribution. Álvaro Parafita, Jordi Vitrià. 19 Sep 2019. [CML, FAtt]
  • A Survey on Bias and Fairness in Machine Learning. Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan. 23 Aug 2019. [SyDa, FaML]
  • A Survey of Learning Causality with Data: Problems and Methods. Ruocheng Guo, Lu Cheng, Jundong Li, P. R. Hahn, Huan Liu. 25 Sep 2018. [CML]
  • A causal framework for explaining the predictions of black-box sequence-to-sequence models. David Alvarez-Melis, Tommi Jaakkola. 06 Jul 2017. [CML]
  • Towards A Rigorous Science of Interpretable Machine Learning. Finale Doshi-Velez, Been Kim. 28 Feb 2017. [XAI, FaML]
  • Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Alexandra Chouldechova. 24 Oct 2016. [FaML]