Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models

3 November 2020
Tom Heskes, E. Sijben, I. G. Bucur, Tom Claassen
FAtt, TDI
arXiv: 2011.01625

Papers citing "Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models"

Showing 27 of 77 citing papers.
Explaining the root causes of unit-level changes
  Kailash Budhathoki, George Michailidis, Dominik Janzing
  FAtt · 26 Jun 2022
Explaining Preferences with Shapley Values
  Robert Hu, Siu Lun Chau, Jaime Ferrando Huertas, Dino Sejdinovic
  TDI, FAtt · 26 May 2022
The Shapley Value in Machine Learning
  Benedek Rozemberczki, Lauren Watson, Péter Bayer, Hao-Tsung Yang, Oliver Kiss, Sebastian Nilsson, Rik Sarkar
  TDI, FAtt · 11 Feb 2022
Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning
  Amit Dhurandhar, Karthikeyan N. Ramamurthy, Kartik Ahuja, Vijay Arya
  FAtt · 28 Jan 2022
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
  Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
  ELM, XAI · 20 Jan 2022
Socioeconomic disparities and COVID-19: the causal connections
  Tannista Banerjee, Ayan Paul, Vishak Srikanth, Inga Strümke
  18 Jan 2022
Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features
  Lars Henry Berge Olsen, I. Glad, Martin Jullum, K. Aas
  TDI, FAtt · 26 Nov 2021
Defining and Quantifying the Emergence of Sparse Concepts in DNNs
  Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, Quanshi Zhang
  11 Nov 2021
Causal versus Marginal Shapley Values for Robotic Lever Manipulation Controlled using Deep Reinforcement Learning
  Sindre Benjamin Remman, Inga Strümke, A. Lekkas
  CML · 04 Nov 2021
RKHS-SHAP: Shapley Values for Kernel Methods
  Siu Lun Chau, Robert Hu, Javier I. González, Dino Sejdinovic
  FAtt · 18 Oct 2021
Explaining Algorithmic Fairness Through Fairness-Aware Causal Path Decomposition
  Weishen Pan, Sen Cui, Jiang Bian, Changshui Zhang, Fei Wang
  CML, FaML · 11 Aug 2021
Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
  Yang Liu, Sujay Khandagale, Colin White, Willie Neiswanger
  23 Jun 2021
Rational Shapley Values
  David S. Watson
  18 Jun 2021
Decomposition of Global Feature Importance into Direct and Associative Components (DEDACT)
  Gunnar Konig, Timo Freiesleben, B. Bischl, Giuseppe Casalicchio, Moritz Grosse-Wentrup
  FAtt · 15 Jun 2021
Local Explanation of Dialogue Response Generation
  Yi-Lin Tuan, Connor Pryor, Wenhu Chen, Lise Getoor, Wenjie Wang
  11 Jun 2021
Accurate Shapley Values for explaining tree-based models
  Salim I. Amoukou, Nicolas Brunel, Tangi Salaun
  TDI, FAtt · 07 Jun 2021
Shapley Counterfactual Credits for Multi-Agent Reinforcement Learning
  Jiahui Li, Kun Kuang, Baoxiang Wang, Furui Liu, Long Chen, Fei Wu, Jun Xiao
  OffRL · 01 Jun 2021
SHAFF: Fast and consistent SHApley eFfect estimates via random Forests
  Clément Bénard, Gérard Biau, Sébastien Da Veiga, Erwan Scornet
  FAtt · 25 May 2021
Explaining a Series of Models by Propagating Shapley Values
  Hugh Chen, Scott M. Lundberg, Su-In Lee
  TDI, FAtt · 30 Apr 2021
Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice
  David S. Watson, Limor Gultchin, Ankur Taly, Luciano Floridi
  27 Mar 2021
The Shapley Value of coalition of variables provides better explanations
  Salim I. Amoukou, Nicolas Brunel, Tangi Salaun
  FAtt, TDI · 24 Mar 2021
Local Interpretations for Explainable Natural Language Processing: A Survey
  Siwen Luo, Hamish Ivison, S. Han, Josiah Poon
  MILM · 20 Mar 2021
A Survey on Neural Network Interpretability
  Yu Zhang, Peter Tiño, A. Leonardis, K. Tang
  FaML, XAI · 28 Dec 2020
Explaining by Removing: A Unified Framework for Model Explanation
  Ian Covert, Scott M. Lundberg, Su-In Lee
  FAtt · 21 Nov 2020
Shapley Flow: A Graph-based Approach to Interpreting Model Predictions
  Jiaxuan Wang, Jenna Wiens, Scott M. Lundberg
  FAtt · 27 Oct 2020
Quantifying intrinsic causal contributions via structure preserving interventions
  Dominik Janzing, Patrick Blobaum, Atalanti A. Mastakouri, P. M. Faller, Lenon Minorics, Kailash Budhathoki
  CML · 01 Jul 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
  Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
  AAML, XAI · 30 Apr 2020